Amazon AWS Certified Solutions Architect - Professional SAP-C02 Exam Practice Questions

The questions for AWS Certified Solutions Architect - Professional SAP-C02 were last updated on Aug. 26, 2024.
Disclaimers:
  • ExamTopics website is not related to, affiliated with, endorsed, or authorized by Amazon.
  • Trademarks, certification, and product names are used for reference only and belong to Amazon.

Topic 1 - Exam A

Question #1 Topic 1

A company needs to architect a hybrid DNS solution. This solution will use an Amazon Route 53 private hosted zone for the domain cloud.example.com for the resources stored within VPCs.
The company has the following DNS resolution requirements:
On-premises systems should be able to resolve and connect to cloud.example.com.
All VPCs should be able to resolve cloud.example.com.
There is already an AWS Direct Connect connection between the on-premises corporate network and AWS Transit Gateway.
Which architecture should the company use to meet these requirements with the HIGHEST performance?

  • A. Associate the private hosted zone to all the VPCs. Create a Route 53 inbound resolver in the shared services VPC. Attach all VPCs to the transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the inbound resolver.
  • B. Associate the private hosted zone to all the VPCs. Deploy an Amazon EC2 conditional forwarder in the shared services VPC. Attach all VPCs to the transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the conditional forwarder.
  • C. Associate the private hosted zone to the shared services VPC. Create a Route 53 outbound resolver in the shared services VPC. Attach all VPCs to the transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the outbound resolver.
  • D. Associate the private hosted zone to the shared services VPC. Create a Route 53 inbound resolver in the shared services VPC. Attach the shared services VPC to the transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the inbound resolver.

Correct Answer: D 🗳️

Community vote distribution
A (89%), Other (11%)

oooiihooo3
Highly Voted 6 days, 7 hours ago
Selected Answer: A
A is the accurate answer. Ref: https://docs-aws.com/blogs/networking-and-content-delivery/
upvoted 55 times
...
robertohyena
Highly Voted 1 year, 8 months ago
A. Correct answer. Source: https://aws.amazon.com/blogs/networking-and-content-delivery/centralized-dns-management-of-hybrid-cloud-with-amazon-route-53-and-aws-transit-gateway/ NOT B. EC2 conditional forwarder will not meet Highest performance requirement. NOT C. Missing: Need to associate private hosted zone to all VPC. "All VPC’s will need to associate their private hosted zones to all other VPC’s if required to." Source: https://aws.amazon.com/blogs/networking-and-content-delivery/centralized-dns-management-of-hybrid-cloud-with-amazon-route-53-and-aws-transit-gateway/ NOT D. Missing: Need to associate private hosted zone to all VPC. "All VPC’s will need to associate their private hosted zones to all other VPC’s if required to." Source: https://aws.amazon.com/blogs/networking-and-content-delivery/centralized-dns-management-of-hybrid-cloud-with-amazon-route-53-and-aws-transit-gateway/
upvoted 53 times
awsylum
6 months ago
In your link, you missed this sentence: "The most reliable, performant and low-cost approach is to share and associate private hosted zones directly to all VPCs that need them." You share the PHZ via the Shared Services VPC. You use the .2 DNS Resolver Address in each VPC to connect to the PHZ in the shared services VPC for domain resolution.
upvoted 1 times
alexkro
5 months ago
You forgot an additional condition mentioned in the question: "All VPCs should be able to resolve cloud.example.com." Nobody said there are only shared VPCs there.
upvoted 1 times
...
...
...
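For readers who want to see what option A looks like in practice, here is a minimal boto3 sketch of the two pieces the comments above describe: associating the private hosted zone with every VPC, and creating a Route 53 inbound resolver endpoint in the shared services VPC. All IDs, names, and the Region are placeholder assumptions, not values from the question.

import boto3

route53 = boto3.client("route53")
resolver = boto3.client("route53resolver")

HOSTED_ZONE_ID = "Z123EXAMPLE"            # private hosted zone for cloud.example.com (placeholder)
VPC_IDS = ["vpc-aaa111", "vpc-bbb222"]    # every workload VPC (placeholders)

# Associate the private hosted zone with each VPC so all VPCs resolve cloud.example.com locally.
for vpc_id in VPC_IDS:
    route53.associate_vpc_with_hosted_zone(
        HostedZoneId=HOSTED_ZONE_ID,
        VPC={"VPCRegion": "us-east-1", "VPCId": vpc_id},
    )

# Create an inbound resolver endpoint in the shared services VPC. The on-premises
# DNS servers then get conditional forwarding rules for cloud.example.com that
# point at this endpoint's IPs, reachable over Direct Connect + Transit Gateway.
resolver.create_resolver_endpoint(
    CreatorRequestId="hybrid-dns-inbound-1",
    Name="shared-services-inbound",
    Direction="INBOUND",
    SecurityGroupIds=["sg-0123example"],  # must allow DNS (TCP/UDP 53) from on premises
    IpAddresses=[
        {"SubnetId": "subnet-aaa111"},    # two subnets in different AZs for resilience
        {"SubnetId": "subnet-bbb222"},
    ],
)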
onlyvimal2103
1 week, 1 day ago
Inbound resolver + private zone
upvoted 1 times
...
buiquangbk90
1 month ago
The correct answer is A.
upvoted 1 times
...
Amazon_Dumps_Com_Web
1 month, 1 week ago
Selected Answer: A
A is still a valid answer.
upvoted 13 times
...
Helpnosense
2 months, 1 week ago
Selected Answer: A
The 2nd requirement in the question is "All VPCs should be able to resolve cloud.example.com." So the answer is A, not D, which covers only one VPC, not all VPCs.
upvoted 1 times
...
TonytheTiger
2 months, 1 week ago
Passed the exam with 822 last week. I spent 3 months studying, went over all the questions three times, and researched all the discussion answers. 90% of the questions came from here, and I saw about 5 new questions.
upvoted 4 times
TonytheTiger
2 months, 1 week ago
And thank you to all the contributors whose comments validate the correct answers for these questions. Let's keep working together to improve our career opportunities.
upvoted 2 times
...
...
AmazonExams
3 months ago
This Answer is correct
upvoted 38 times
...
hahaha1
3 months ago
Passed the exam today with a score of 836. 85% to 90% of the questions were from this dump; the new questions are easy, though.
upvoted 1 times
...
Aanand
3 months ago
Does anyone have a pro account?
upvoted 1 times
...
higashikumi
3 months, 1 week ago
Selected Answer: A
To achieve the highest performance hybrid DNS solution, the company should associate a Route 53 private hosted zone with "cloud.example.com" to all VPCs, then create a Route 53 inbound resolver in a shared services VPC. This inbound resolver is connected to the on-premises network via AWS Direct Connect and Transit Gateway, allowing on-premises systems to resolve the private hosted zone. Forwarding rules on the on-premises DNS server direct queries for "cloud.example.com" to the inbound resolver, ensuring seamless resolution for both on-premises and cloud resources.
upvoted 2 times
...
AloraCloud
3 months, 1 week ago
Selected Answer: A
You need to associate the private hosted zone to all the VPCs for them to be able to use it for DNS resolution.
upvoted 1 times
...
kfgan
4 months ago
Just passed today with a score of 810. The questions are a mixture from the entire dump; I would say 30% from questions 1-200 and 70% from 201-480.
upvoted 5 times
...
MoT0ne
4 months, 2 weeks ago
I relied fully on my working knowledge for the exam and failed with a score of 731 :( Thanks to the free retake coupon, I have another chance to prepare with an examination strategy!
upvoted 1 times
QasimAWS
4 months, 1 week ago
That's not bad without an examination strategy; mine is next week.
upvoted 1 times
...
...
AlbertC
4 months, 2 weeks ago
Passed the exam on my first attempt with 842 yesterday. I thought I might have failed it at the end (50% chance); only 1 minute was left when I finished answering all the questions. I am an old and slow guy. I went through this exam guide twice; more exam questions came from the last two pages. I don't think I could have passed without this exam guide. 90% of the questions matched.
upvoted 4 times
...
ichi2kazu
4 months, 3 weeks ago
I think A.
upvoted 1 times
...
jj888
4 months, 3 weeks ago
Selected Answer: A
All VPC’s will need to associate their private hosted zones to all other VPC’s if required to
upvoted 1 times
...
frmynd
4 months, 3 weeks ago
Selected Answer: A
https://docs.aws.amazon.com/whitepapers/latest/hybrid-cloud-dns-options-for-vpc/route-53-resolver-endpoints-and-forwarding-rules.html
upvoted 3 times
...
gofavad926
5 months, 1 week ago
Selected Answer: A
By associating the Route 53 private hosted zone with all VPCs, resources within any of those VPCs can resolve domain names within the cloud.example.com domain.
upvoted 1 times
...
MoT0ne
5 months, 2 weeks ago
Selected Answer: D
Using "shared services" is the magic phrase here.
upvoted 1 times
...
leoncao
5 months, 3 weeks ago
Selected Answer: A
D is definitely wrong.
upvoted 2 times
...
Dgix
5 months, 4 weeks ago
A and B are out since they talk about attaching the private domain to all accounts. This is wrong; you attach it to the shared VPC in the networking account which then is used for any local VPCs. This reduces the question to whether we need an inbound or outbound resolver for the onprem infra; the answer is that for onprem to be able to resolve the domain, we need an inbound resolver. And therefore the only possible correct answer is D. I see that most people voted A, but I'm afraid that's wrong.
upvoted 1 times
Shenannigan
3 months ago
D is incorrect because it would not provide the necessary DNS resolution for all VPCs; only the shared services VPC would have the private hosted zone associated, limiting the resolution scope.
upvoted 1 times
...
...
24Gel
5 months, 4 weeks ago
B and C are definitely not the answers.
upvoted 1 times
...
luis_guevara
6 months ago
Selected Answer: D
The most efficient way to get all the VPCs to be able to resolve the private domain is using a shared services VPC.
upvoted 3 times
...
awsylum
6 months ago
The answer is D. Why? Because you associate a single Private Hosted Zone with DNS Resolvers in multiple VPCs (.2 address). You don't associate a PHZ in each VPC. That's the point of each VPC having a DNS Resolver address. So, you use the Shared Services VPC to host the PHZ with the Route53 inbound endpoint. Each VPC uses the DNS Resolver address to connect to the Shared Services VPC. And on the flip side, the Transit Gateway allows the on-prem traffic to connect to all VPCs using the Route53 inbound endpoint. Scroll down to the On premises section of this page: https://aws.amazon.com/blogs/networking-and-content-delivery/integrating-aws-transit-gateway-with-aws-privatelink-and-amazon-route-53-resolver/
upvoted 2 times
awsylum
6 months ago
Just to clarify, the Transit Gateway is to provide Layer 3 networking between the on-prem and AWS environments, while the Inbound Route53 Endpoint is used to join the DNS of on-prem and AWS environments. I kind of mixed the two up in my explanation above.
upvoted 1 times
...
...
awsgeek75
7 months ago
Selected Answer: A
B and C don't have a resolver set up for on-prem traffic, as the outbound resolver is on the AWS side. Even if D works, the traffic will have taken a longer route, and D does not connect all the VPCs.
upvoted 3 times
...
GibaSP45
7 months, 3 weeks ago
Selected Answer: A
https://aws.amazon.com/blogs/networking-and-content-delivery/centralized-dns-management-of-hybrid-cloud-with-amazon-route-53-and-aws-transit-gateway/
upvoted 3 times
...
atirado
8 months, 1 week ago
Selected Answer: A
All options mention a shared services VPC that is not in the question; this is used to host Route 53 resources for cloud.example.com. Option A - Associating all VPCs with the private hosted zone allows resolution of cloud.example.com; an inbound resolver allows on-premises resources to resolve cloud.example.com; the final bit of connectivity allows on premises to connect to and resolve cloud.example.com. Option B - An Amazon EC2 conditional forwarder does not apply in this situation because Active Directory is not in play. Option C - Would not work because it relies on an outbound resolver (from cloud to on premises). Option D - Would not work because the other VPCs are not associated with the private zone; moreover, connectivity is incomplete because only the shared services VPC is attached to the Transit Gateway.
upvoted 3 times
...
cgsoft
8 months, 2 weeks ago
Selected Answer: A
All VPCs should be able to resolve cloud.example.com. This is possible if all VPCs are associated with the private hosted zone.
upvoted 1 times
...
ninomfr64
8 months, 3 weeks ago
Selected Answer: A
Not B. "EC2 conditional forwarder in the shared services VPC" as conditional forward is not needed in this scenario and also I am not aware of EC2 conditional forward (but I am not an expert) Not C. "private hosted zone to the shared services VPC" we need to have it attached to all VPCs and "Route 53 outbound resolver" is not needed as we do not need to resolve on-preme from VPCs in this scenario Not D. "private hosted zone to the shared services VPC" we need to have it attached to all VPCs, ref to https://docs.aws.amazon.com/whitepapers/latest/hybrid-cloud-dns-options-for-vpc/route-53-resolver-endpoints-and-forwarding-rules.html#:~:text=AWS%20Glossary-,Route%C2%A053%20Resolver%20endpoints%20and%20forwarding%20rules,-PDF (this is not super clear as diagrams do not include VPCs and steps do not refer to VPCs consistently) Thus A. is correct as we have all the key pieces needed listed there
upvoted 1 times
...
abeb
9 months ago
A is correct
upvoted 2 times
...
KevinYao
9 months ago
Selected Answer: A
All VPCs can resolve the cloud.example.com domain name.
upvoted 1 times
...
jainparag1
9 months, 1 week ago
D seems to be the right answer.
upvoted 1 times
...
severlight
9 months, 2 weeks ago
Selected Answer: A
A DNS server (aka a Route 53 resolver) will only know how to resolve a hostname if you provide the DNS table (aka the Route 53 PHZ) to it. A separate DNS server is available in every VPC, hence you need to provide the PHZ to all VPCs, i.e., "associate" it.
upvoted 1 times
...
rlf
10 months, 3 weeks ago
A. And this blog is also helpful to understand https://aws.amazon.com/blogs/architecture/using-route-53-private-hosted-zones-for-cross-account-multi-region-architectures/
upvoted 1 times
...
puffetor
11 months ago
Selected Answer: A
https://aws.amazon.com/blogs/networking-and-content-delivery/centralized-dns-management-of-hybrid-cloud-with-amazon-route-53-and-aws-transit-gateway/
upvoted 1 times
...
ansgohar
11 months ago
Selected Answer: A
A. Associate the private hosted zone to all the VPCs. Create a Route 53 inbound resolver in the shared services VPC. Attach all VPCs to the transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the inbound resolver.
upvoted 2 times
...
task_7
11 months, 2 weeks ago
Selected Answer: D
D provides the best balance between performance, simplicity, and security, making it the most suitable choice for the given requirements. By using a Route 53 inbound resolver within the shared services VPC, you reduce the latency and complexity associated with forwarding DNS queries to other VPCs or EC2 instances.
upvoted 3 times
...
Soweetadad
11 months, 3 weeks ago
Selected Answer: A
Answer is A. In the link that someone posted, it says "When a Route 53 private hosted zone needs to be resolved in multiple VPCs and AWS accounts as described earlier, the most reliable pattern is to share the private hosted zone between accounts and associate it to each VPC that needs it." https://aws.amazon.com/blogs/networking-and-content-delivery/centralized-dns-management-of-hybrid-cloud-with-amazon-route-53-and-aws-transit-gateway/
upvoted 1 times
...
career360guru
11 months, 3 weeks ago
The correct answer is A. D does not meet the requirement that all VPCs be able to resolve cloud.example.com.
upvoted 1 times
...
dimitry_khan_arc
1 year ago
Selected Answer: D
D is more suitable
upvoted 2 times
vn_thanhtung
12 months ago
https://aws.amazon.com/vi/blogs/networking-and-content-delivery/centralized-dns-management-of-hybrid-cloud-with-amazon-route-53-and-aws-transit-gateway/#:~:text=Although%20it%20is%20possible%20to%20use%20forwarding%20rules%20to%20resolve%20private%20hosted%20zones%20in%20other%20VPCs%2C%20we%20do%20not%20recommend%20that.%20The%20most%20reliable%2C%20performant%20and%20low%2Dcost%20approach%20is%20to%20share%20and%20associate%20private%20hosted%20zones%20directly%20to%20all%20VPCs%20that%20need%20them Answer is A not D
upvoted 1 times
...
...
weequan
1 year ago
Selected Answer: D
https://aws.amazon.com/blogs/security/simplify-dns-management-in-a-multiaccount-environment-with-route-53-resolver/
upvoted 1 times
...
autobahn
1 year ago
So which is the correct answer, A or D? When most people have voted for A, should I take that as the correct answer?
upvoted 1 times
...
chico2023
1 year, 1 month ago
Selected Answer: A
Requirement 2: All VPCs should be able to resolve cloud.example.com.
upvoted 1 times
...
Magoose
1 year, 1 month ago
Selected Answer: A
Option D is incorrect because it associates the private hosted zone only with the shared services VPC, rather than all the VPCs. This does not meet the requirement of ensuring that all VPCs can resolve cloud.example.com
upvoted 2 times
...
NikkyDicky
1 year, 2 months ago
Selected Answer: A
It's A.
upvoted 1 times
...
Jonalb
1 year, 2 months ago
Selected Answer: A
It's A. https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver-overview-DSN-queries-to-vpc.html https://aws.amazon.com/pt/blogs/networking-and-content-delivery/centralized-dns-management-of-hybrid-cloud-with-amazon-route-53-and-aws-transit-gateway/
upvoted 3 times
...
antonvigs
1 year, 2 months ago
Selected Answer: A
"The most reliable, performant and low-cost approach is to share and associate private hosted zones directly to all VPCs that need them." Reference: https://aws.amazon.com/blogs/networking-and-content-delivery/centralized-dns-management-of-hybrid-cloud-with-amazon-route-53-and-aws-transit-gateway/
upvoted 2 times
...
antonvigs
1 year, 2 months ago
"The most reliable, performant and low-cost approach is to share and associate private hosted zones directly to all VPCs that need them." Ref:https://aws.amazon.com/blogs/networking-and-content-delivery/centralized-dns-management-of-hybrid-cloud-with-amazon-route-53-and-aws-transit-gateway/
upvoted 1 times
...
tromyunpak
1 year, 2 months ago
All the VPCs need to reach the inbound resolver in the shared services VPC, so TGW attachments are needed. So IMO the answer is A.
upvoted 1 times
...
Roontha
1 year, 2 months ago
Answer : A https://medium.com/tuimm/resolve-aws-private-hosted-zones-from-on-premise-with-route-53-inbound-resolver-ba683b371522
upvoted 1 times
Roontha
1 year, 2 months ago
I go with D. AWS has given the info on this exact use case with an architecture diagram: https://aws.amazon.com/blogs/networking-and-content-delivery/centralized-dns-management-of-hybrid-cloud-with-amazon-route-53-and-aws-transit-gateway/
upvoted 1 times
...
...
rtguru
1 year, 3 months ago
I go with D
upvoted 1 times
...
dev112233xx
1 year, 3 months ago
Selected Answer: D
A doesn't make sense! Why attach all the VPCs to the TGW? Was there a requirement in the question to share all VPC networks? The requirement was to share only the PHZ with all VPCs and create an inbound resolver for on premises, so I think D makes more sense.
upvoted 1 times
...
dewlim
1 year, 3 months ago
Exactly, the correct answer is A.
upvoted 1 times
Roontha
1 year, 2 months ago
@dewlim It seems to be answer D. The AWS blog post explains this exact use case with an architecture diagram: https://aws.amazon.com/blogs/networking-and-content-delivery/centralized-dns-management-of-hybrid-cloud-with-amazon-route-53-and-aws-transit-gateway/
upvoted 1 times
...
...
hellothereby
1 year, 3 months ago
A is correct.
upvoted 1 times
...
RunkieMax
1 year, 3 months ago
Selected Answer: A
A fits the question best.
upvoted 1 times
...
Limlimwdwd
1 year, 3 months ago
Selected Answer: A
It should be for all VPCs.
upvoted 1 times
...
AWS_Sam
1 year, 3 months ago
The correct answer is A. Another reason A is correct and D is wrong: all VPCs need to be connected to the Transit Gateway for them to be able to communicate.
upvoted 1 times
...
F_Eldin
1 year, 3 months ago
Selected Answer: A
Option B is not optimal because the use of an EC2 conditional forwarder can introduce additional latency and potential points of failure. Option C is not optimal because it requires all VPCs to use the outbound resolver in the shared services VPC to resolve cloud.example.com, which may introduce additional latency. Option D is not optimal because it only allows the shared services VPC to resolve cloud.example.com, and all other VPCs and on-premises systems would have to forward DNS queries to the shared services VPC, which can introduce additional latency and potential points of failure.
upvoted 1 times
...
God_Is_Love
1 year, 4 months ago
Selected Answer: D
D is correct, not A. https://d2908q01vomqb2.cloudfront.net/5b384ce32d8cdef02bc3a139d4cac0a22bb029e8 The Route 53 private hosted zone (for example, sqs.us-east-1.amazonaws.com) is associated to the shared services VPC (NOT all VPCs as in option A).
upvoted 4 times
...
moses101
1 year, 4 months ago
A is the most correct answer.
upvoted 1 times
...
rubio83
1 year, 4 months ago
Is this dump alone enough for the exam?
upvoted 1 times
...
mfsec
1 year, 5 months ago
Selected Answer: A
Associate the private hosted zone to all the VPCs.
upvoted 1 times
...
IndreshKumar
1 year, 5 months ago
Selected Answer: A
A. Correct answer
upvoted 1 times
...
mKrishna
1 year, 5 months ago
The correct answer is A. Why D is not correct: the transit gateway may need to forward requests to the inbound resolver, which would introduce additional latency.
upvoted 1 times
...
kiran15789
1 year, 5 months ago
Selected Answer: A
https://aws.amazon.com/blogs/networking-and-content-delivery/centralized-dns-management-of-hybrid-cloud-with-amazon-route-53-and-aws-transit-gateway/
upvoted 2 times
Jonalb
1 year, 2 months ago
On premises my friend.
upvoted 1 times
...
...
krushna5966
1 year, 5 months ago
Everyone has selected option A, so why is the system showing option D? Can anyone explain?
upvoted 4 times
AWS_Sam
1 year, 3 months ago
I have the same question
upvoted 1 times
...
...
gameoflove
1 year, 5 months ago
Selected Answer: A
https://aws.amazon.com/blogs/networking-and-content-delivery/centralized-dns-management-of-hybrid-cloud-with-amazon-route-53-and-aws-transit-gateway/
upvoted 1 times
...
Gabehcoud
1 year, 6 months ago
Can I check with those who have taken the exam: 1. Was this question even there? 2. Was answer A right?
upvoted 2 times
...
ospherenet
1 year, 6 months ago
It appears that Option A is the correct answer. The company can associate the private hosted zone to all the VPCs and create a Route 53 inbound resolver in the shared services VPC. They can then attach all VPCs to the transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the inbound resolver. This will allow on-premises systems to resolve and connect to cloud.example.com and all VPCs to resolve cloud.example.com with the highest performance. Option B is incorrect because an EC2 conditional forwarder will not meet the highest performance requirement. Option C and D are incorrect because they both miss the requirement of associating the private hosted zone to all the VPCs.
upvoted 2 times
...
c73bf38
1 year, 6 months ago
Selected Answer: A
The best architecture to meet the given requirements with the HIGHEST performance would be Option A: A. Associate the private hosted zone to all the VPCs. Create a Route 53 inbound resolver in the shared services VPC. Attach all VPCs to the transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the inbound resolver. This architecture ensures that all VPCs can resolve the cloud.example.com domain using the private hosted zone. Additionally, it creates a Route 53 inbound resolver in the shared services VPC that can handle DNS resolution requests from on-premises systems through the transit gateway. This setup allows for fast and efficient DNS resolution with minimal latency.
upvoted 1 times
...
Sarutobi
1 year, 6 months ago
Selected Answer: A
A is correct.
upvoted 1 times
...
Jacktheriser2019
1 year, 7 months ago
Answer: A.
upvoted 1 times
...
Nicocacik
1 year, 7 months ago
Selected Answer: A
Definitely A
upvoted 1 times
...
masetromain
1 year, 7 months ago
Selected Answer: A
The correct option would be option A: Associate the private hosted zone to all the VPCs. Create a Route 53 inbound resolver in the shared services VPC. Attach all VPCs to the transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the inbound resolver. This option will allow the on-premises systems to resolve and connect to cloud.example.com by forwarding the DNS queries to the inbound resolver in the shared services VPC, which will then forward the queries to the private hosted zone. All VPCs will be able to resolve cloud.example.com by resolving the queries through the private hosted zone associated to all VPCs. Additionally, this option takes advantage of the already existing AWS Direct Connect connection between the on-premises corporate network and AWS Transit Gateway, which will provide the highest performance.
upvoted 1 times
...
AjayD123
1 year, 7 months ago
Selected Answer: A
A is the correct answer, as all VPCs need access.
upvoted 1 times
...
WuKongCoder
1 year, 8 months ago
A is the correct answer.
upvoted 2 times
...
arron86
1 year, 8 months ago
Selected Answer: A
https://aws.amazon.com/blogs/networking-and-content-delivery/centralized-dns-management-of-hybrid-cloud-with-amazon-route-53-and-aws-transit-gateway/
upvoted 4 times
...
zhangyu20000
1 year, 8 months ago
A, because the question requires that all VPCs can resolve cloud.example.com. All VPCs must be associated with the private hosted zone.
upvoted 9 times
...
Question #2 Topic 1

A company is providing weather data over a REST-based API to several customers. The API is hosted by Amazon API Gateway and is integrated with different AWS Lambda functions for each API operation. The company uses Amazon Route 53 for DNS and has created a resource record of weather.example.com. The company stores data for the API in Amazon DynamoDB tables. The company needs a solution that will give the API the ability to fail over to a different AWS Region.
Which solution will meet these requirements?

  • A. Deploy a new set of Lambda functions in a new Region. Update the API Gateway API to use an edge-optimized API endpoint with Lambda functions from both Regions as targets. Convert the DynamoDB tables to global tables.
  • B. Deploy a new API Gateway API and Lambda functions in another Region. Change the Route 53 DNS record to a multivalue answer. Add both API Gateway APIs to the answer. Enable target health monitoring. Convert the DynamoDB tables to global tables.
  • C. Deploy a new API Gateway API and Lambda functions in another Region. Change the Route 53 DNS record to a failover record. Enable target health monitoring. Convert the DynamoDB tables to global tables.
  • D. Deploy a new API Gateway API in a new Region. Change the Lambda functions to global functions. Change the Route 53 DNS record to a multivalue answer. Add both API Gateway APIs to the answer. Enable target health monitoring. Convert the DynamoDB tables to global tables.

Correct Answer: C 🗳️

Community vote distribution
C (97%), Other (1%)

robertohyena
Highly Voted 1 year, 8 months ago
C. https://docs.aws.amazon.com/apigateway/latest/developerguide/dns-failover.html
upvoted 14 times
leehjworking
1 year, 4 months ago
Step1 - set up resources - Route 53 failover DNS records for the domain names
upvoted 2 times
...
...
c73bf38
Highly Voted 1 year, 6 months ago
The best solution to give the API the ability to fail over to a different AWS Region would be option C: C. Deploy a new API Gateway API and Lambda functions in another Region. Change the Route 53 DNS record to a failover record. Enable target health monitoring. Convert the DynamoDB tables to global tables. This solution involves deploying a new API Gateway API and Lambda functions in another region. The company should also convert the DynamoDB tables to global tables to enable cross-region replication of the data. Then, the company should change the Route 53 DNS record to a failover record and enable target health monitoring to automatically route traffic to the new region in the event of a failure or outage in the primary region.
upvoted 8 times
...
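To make the failover setup in option C concrete, here is a minimal boto3 sketch: a health check on the primary Region's API endpoint plus PRIMARY/SECONDARY failover records for weather.example.com. The hosted zone ID, regional domain names, and health-check path are placeholder assumptions.

import boto3

route53 = boto3.client("route53")

ZONE_ID = "Z123EXAMPLE"  # public hosted zone for example.com (placeholder)

# Health check that watches the primary Region's API Gateway endpoint.
health = route53.create_health_check(
    CallerReference="weather-api-primary-1",
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "api-primary.example.com",  # placeholder regional domain
        "ResourcePath": "/health",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)

# PRIMARY/SECONDARY failover records: Route 53 answers with the secondary
# record only while the primary health check is failing.
route53.change_resource_record_sets(
    HostedZoneId=ZONE_ID,
    ChangeBatch={"Changes": [
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "weather.example.com", "Type": "CNAME",
            "SetIdentifier": "primary", "Failover": "PRIMARY",
            "TTL": 60,
            "ResourceRecords": [{"Value": "api-primary.example.com"}],
            "HealthCheckId": health["HealthCheck"]["Id"],
        }},
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "weather.example.com", "Type": "CNAME",
            "SetIdentifier": "secondary", "Failover": "SECONDARY",
            "TTL": 60,
            "ResourceRecords": [{"Value": "api-secondary.example.com"}],
        }},
    ]},
)

DynamoDB global tables then keep the data replicated in both Regions so the secondary API can serve reads and writes during a failover.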
Helpnosense
2 months, 1 week ago
Selected Answer: D
The changes of A and C are too much, breaking the original security design. B is wrong because answer B doesn't mention deny SCP on root level is changed. Allow on OU will not win because when allow and deny the same service, explicit deny always wins for the sake of security concerns.
upvoted 1 times
...
lighthouse85
2 months, 3 weeks ago
Selected Answer: C
C, failover health
upvoted 2 times
...
higashikumi
3 months, 1 week ago
Selected Answer: C
To achieve automatic failover for the weather API, the company should deploy a duplicate API Gateway and Lambda functions in a secondary AWS region, then configure a Route 53 failover record that points to both endpoints. This failover record, combined with health checks, will automatically redirect traffic to the secondary region if the primary one fails. Additionally, converting DynamoDB tables to global tables ensures data availability in both regions, allowing the secondary API to function seamlessly during a failover.
upvoted 2 times
...
gofavad926
5 months, 1 week ago
Selected Answer: C
C, failover record; this is the typical failover configuration in Route 53. Be careful: ChatGPT suggests option B, "multivalue answer".
upvoted 1 times
...
MoT0ne
5 months, 2 weeks ago
Selected Answer: C
Choosing C because you want the API GW and Lambda functions to work as a combination behind DNS with failover; you can think of Route 53 here as a managed DNS provider like Cloudflare.
upvoted 1 times
...
atirado
8 months, 1 week ago
Selected Answer: C
Option A - Does not provide a way to fail over to a new region but rather a way for API gateway to respond from the region closest to the client Option B - Does not provide a way to fail over to a new region because when the main region is healthy name resolution will provide 2 possible regions to connect to Option C - Provides a way to fail over to a new region through the use of a Route 53 failover record and health monitoring and deployment in another region Option D - Does not provide a way to fail over to a new region because when the main region is healthy name resolution will provide 2 possible regions to connect to
upvoted 5 times
...
ninomfr64
8 months, 3 weeks ago
Selected Answer: C
Not A. "edge-optimized API endpoint" make use of CloudFront to optimize global each, however API Gateway instance is deployed in a single region thus no ability to fail over to a different AWS Region Not B. "Route 53 DNS record to a multivalue" implements a active-active scenario, while we are requested to have fail over Not D. I am not aware of "global function" also "Route 53 DNS record to a multivalue" is not the best fit (see above) Thus C. is correct has it come with all the required pieces
upvoted 3 times
...
abeb
9 months ago
C is correct
upvoted 1 times
...
edder
9 months, 1 week ago
Selected Answer: B
The answer is B. A: There is no Route 53, so it cannot be switched in the event of a failure. C: It's good to change to a failover record, but compared to other questions, there is no step to add a DNS record answer, so you can't switch to a new region. D: The global function is meaningless. B: A health check is additionally set, and failover is possible because the corresponding records are not returned in the event of a region failure. https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-configuring.html
upvoted 1 times
...
severlight
9 months, 2 weeks ago
Selected Answer: C
failover is required
upvoted 1 times
...
Jean_PA
10 months, 4 weeks ago
Selected Answer: C
C is correct.
upvoted 2 times
...
ansgohar
11 months ago
Selected Answer: C
C. Deploy a new API Gateway API and Lambda functions in another Region. Change the Route 53 DNS record to a failover record. Enable target health monitoring. Convert the DynamoDB tables to global tables.
upvoted 1 times
...
Simon523
12 months ago
Selected Answer: C
https://thewebspark.com/2020/07/14/handling-multi-region-fail-over-with-amazon-route-53-tutorial/
upvoted 1 times
...
dimitry_khan_arc
1 year ago
Selected Answer: C
C is my choice
upvoted 1 times
...
whenthan
1 year ago
Selected Answer: C
https://d1.awsstatic.com/events/reinvent/2019/REPEAT_1_Best_practices_for_building_multi-region,_active-active_serverless_applications_SVS337-R1.pdf
upvoted 1 times
...
stevegod0
1 year ago
C is correct.
upvoted 1 times
...
NikkyDicky
1 year, 2 months ago
Selected Answer: C
It's C
upvoted 1 times
...
cheese929
1 year, 2 months ago
Selected Answer: C
C is correct
upvoted 1 times
...
RunkieMax
1 year, 3 months ago
Selected Answer: C
C fits the question best.
upvoted 1 times
...
braveheart22
1 year, 3 months ago
c73bf38, I totally agree with the explanation.
upvoted 1 times
...
Sarutobi
1 year, 4 months ago
Selected Answer: C
I also agree with C, but I'm not sure why not B; B is actually a pretty good option. Not that I have experience in this specific case; what I normally see is Active/Standby. But option B sounds good because, in theory, we need to have both regions running the current code (Lambda), and if an outage happens we are sure both work and we don't have stale config/code in the failover region. Sometimes a multivalue answer does not return the best endpoint for the use case, so that could be something against this solution.
upvoted 3 times
...
mfsec
1 year, 5 months ago
Selected Answer: C
C is good here
upvoted 1 times
...
kiran15789
1 year, 5 months ago
Selected Answer: C
https://docs.aws.amazon.com/apigateway/latest/developerguide/dns-failover.html
upvoted 1 times
...
dev112233xx
1 year, 6 months ago
Selected Answer: C
Easy one :)
upvoted 1 times
...
Sarutobi
1 year, 6 months ago
Selected Answer: C
C is correct.
upvoted 1 times
...
masetromain
1 year, 7 months ago
Selected Answer: C
The solution that will meet these requirements is option C: Deploy a new API Gateway API and Lambda functions in another Region. Change the Route 53 DNS record to a failover record. Enable target health monitoring. Convert the DynamoDB tables to global tables. This solution will allow the API to failover to a different region, by using Route 53 failover record. The failover record will direct traffic to the primary API endpoint (the one in the primary region) as long as it is healthy. If the primary endpoint becomes unavailable, traffic will be directed to the secondary endpoint (the one in the secondary region). Additionally, by converting the DynamoDB tables to global tables, the data will be available in both regions, which is required for the failover scenario. Target health monitoring can be used to monitor the health of the API Gateway, and when it is determined that the primary endpoint is unavailable, the traffic will be directed to the secondary endpoint.
upvoted 3 times
...
masetromain
1 year, 8 months ago
Selected Answer: C
I agree with answer C. This is the correct use case for a Route 53 DNS failover record.
upvoted 4 times
...
Question #3 Topic 1

A company uses AWS Organizations with a single OU named Production to manage multiple accounts. All accounts are members of the Production OU. Administrators use deny list SCPs in the root of the organization to manage access to restricted services.
The company recently acquired a new business unit and invited the new unit’s existing AWS account to the organization. Once onboarded, the administrators of the new business unit discovered that they are not able to update existing AWS Config rules to meet the company’s policies.
Which option will allow administrators to make changes and continue to enforce the current policies without introducing additional long-term maintenance?

  • A. Remove the organization’s root SCPs that limit access to AWS Config. Create AWS Service Catalog products for the company’s standard AWS Config rules and deploy them throughout the organization, including the new account.
  • B. Create a temporary OU named Onboarding for the new account. Apply an SCP to the Onboarding OU to allow AWS Config actions. Move the new account to the Production OU when adjustments to AWS Config are complete.
  • C. Convert the organization’s root SCPs from deny list SCPs to allow list SCPs to allow the required services only. Temporarily apply an SCP to the organization’s root that allows AWS Config actions for principals only in the new account.
  • D. Create a temporary OU named Onboarding for the new account. Apply an SCP to the Onboarding OU to allow AWS Config actions. Move the organization’s root SCP to the Production OU. Move the new account to the Production OU when adjustments to AWS Config are complete.

Correct Answer: B 🗳️

Community vote distribution
D (83%), Other

Snip
Highly Voted 1 year, 8 months ago
The right answer is D. An SCP at a lower level can't add a permission after it is blocked by an SCP at a higher level; SCPs can only filter, they never add permissions. So you need to create a new OU for the new account, assign an SCP, and move the root SCP to the Production OU. Then move the new account to the Production OU when the AWS Config work is done.
upvoted 49 times
...
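As an illustration of the option D workflow the top comments describe, here is a minimal boto3 sketch. All IDs are placeholders, and it assumes the default FullAWSAccess SCP stays attached to the root, so the root is never left without a policy (the constraint awsylum raises below).

import boto3

orgs = boto3.client("organizations")

ROOT_ID = "r-exmp"                 # organization root (placeholder)
PROD_OU_ID = "ou-exmp-prod1234"    # Production OU (placeholder)
DENY_SCP_ID = "p-denylist1"        # existing deny-list SCP (placeholder)
NEW_ACCOUNT_ID = "111122223333"    # invited account (placeholder)

# 1. Re-home the deny-list SCP: attach it to Production first so existing
#    accounts stay restricted, then detach it from the root.
orgs.attach_policy(PolicyId=DENY_SCP_ID, TargetId=PROD_OU_ID)
orgs.detach_policy(PolicyId=DENY_SCP_ID, TargetId=ROOT_ID)

# 2. Create the temporary Onboarding OU. In a deny-list setup it inherits only
#    FullAWSAccess from the root, so AWS Config actions are no longer denied
#    there and no extra allow SCP is strictly required.
onboarding = orgs.create_organizational_unit(ParentId=ROOT_ID, Name="Onboarding")
ONBOARDING_OU_ID = onboarding["OrganizationalUnit"]["Id"]

# 3. Park the new account in Onboarding while its AWS Config rules are adjusted.
orgs.move_account(
    AccountId=NEW_ACCOUNT_ID,
    SourceParentId=ROOT_ID,
    DestinationParentId=ONBOARDING_OU_ID,
)

# 4. Once the Config adjustments are complete, move the account into Production
#    so the deny-list SCP applies to it again:
# orgs.move_account(AccountId=NEW_ACCOUNT_ID,
#                   SourceParentId=ONBOARDING_OU_ID,
#                   DestinationParentId=PROD_OU_ID)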
robertohyena
Highly Voted 1 year, 8 months ago
Answer: D. Not A: too much overhead and maintenance. Not B: SCP at Root will still deny Config to the temporary OU. Not C: Too much overhead to create allow list.
upvoted 17 times
...
niroop893
3 weeks, 3 days ago
Answer: D
upvoted 1 times
...
pnannepaga
3 months, 2 weeks ago
For all the answers provided, which one is usually correct: the revealed solution or the most voted?
upvoted 1 times
Jason666888
3 weeks, 2 days ago
We should take the most voted.
upvoted 1 times
...
...
Mikep12357
3 months, 3 weeks ago
Option B. If a "Deny" list SCP is applied at the root of the organization to restrict access to a service, and then a new SCP is created at a lower level (e.g., an Organizational Unit or OU) to "Allow" access to that restricted service, the permissions are cumulative. So, if an account is placed under the Test OU, it will inherit the permissions from both SCPs. Since the "Allow" SCP at the Test OU level overrides the "Deny" SCP at the root level, the account under the Test OU will effectively have access to the restricted service. This is because SCPs are evaluated hierarchically, with SCPs at higher levels in the organizational structure being evaluated first, followed by SCPs at lower levels. When there are conflicting SCPs, the most permissive policy (i.e., the one that allows access) takes precedence.
upvoted 2 times
...
TonytheTiger
4 months, 3 weeks ago
Selected Answer: D
Option D. The link doesn't give you a full explanation of why "D" is correct; however, it does check all the boxes: https://docs.aws.amazon.com/whitepapers/latest/organizing-your-aws-environment/transitional-ou.html
upvoted 2 times
...
Dgix
5 months, 4 weeks ago
This question is ambiguous. If D was formulated like this: "D. Create a temporary OU named Onboarding for the new account. Apply a Config non-blocking SCP to the Onboarding OU to allow AWS Config actions. Apply the organization’s root SCP to the Production OU instead of to the root OU. Move the new account to the Production OU when adjustments to AWS Config are complete." Then D would be a viable option. However, it isn't, and even if it were, it fails to mention the crucial fact that the Root OU always must have an SCP, which in this case must Allow everything. For someone with some experience this is a given, but as it isn't mentioned, I'd go for B. However, AWS should reformulate the question and the answers. They are really subpar.
upvoted 2 times
JOKERO
5 months, 3 weeks ago
AWS Config will still be restricted despite the Allow SCP in Onboarding because of the Deny SCP in the root of the organization
upvoted 3 times
fartosh
4 months ago
This sentence: "Apply the organization’s root SCP to the Production OU instead of to the root OU." solves the issue you mentioned. You can safely move this SCP as the question states that all AWS accounts are in Production OU.
upvoted 1 times
...
...
...
awsylum
6 months ago
I don't like any of the answers, to be honest. Let's look at D since that's the one most people think is right. The problem with D is that you can't detach the last SCP associated with a root container, OU, or account; there has to be at least one. So removing the SCP from the root and moving it down to the Production OU is a no-go unless you add a permissible SCP to the root. Check the section on detaching here: https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_attach.html The only way B is correct is if the reason the new admins don't have access to Config is not that Config is in the deny list, but that the management account doesn't have the appropriate IAM policy giving PERMISSION to Config. You need both an IAM policy and a permissible SCP to have permission and access to a service. But why wasn't an IAM policy mentioned in choice B? Clearly, without that information, choice B also is not right.
upvoted 1 times
awsylum
6 months ago
Also, even if you could remove a root SCP, you would never do that in production. You would never just flat remove an SCP with a Deny list just to give one account access to some service. Even if it's temporary, that's a fatal mistake as the other accounts will not be restricted from certain services they shouldn't have access to.
upvoted 1 times
...
awsylum
6 months ago
The question mentioned a Deny List architecture, but it didn't specifically say Config was in the Deny List. We are assuming that, which could lead to the wrong answer. Unfortunately, I'm not satisfied with any of the answers. Hopefully, this is a question that would be thrown out from the exam. LOL.
upvoted 1 times
...
...
DmitriKonnovNN
6 months, 3 weeks ago
The question itself is a bit confusing. It says "deny list SCPs in the root", which should be understood as a deny-list architecture but can be misinterpreted as an allow-list architecture with a deny list attached to the root that explicitly denies AWS Config. Since AWS Config is denied on the Production OU, an appropriate SCP that explicitly denies AWS Config is attached to it; thus, the root has the FullAWSAccess SCP attached. That's why we just need to create an Onboarding OU with no explicit deny of AWS Config, and that's it. So the given answer is indeed correct, but the question is tricky and easy to misunderstand.
upvoted 1 times
...
kobi44
7 months ago
Option D: how will creating a new OU solve the problem? The root SCP will still deny it, won't it? Also, why do we need to move the organization's root SCP to the Production OU?
upvoted 1 times
...
GabrielShiao
7 months, 3 weeks ago
Answer D is the most accurate. It would be good to add another statement saying "Add the FullAWSAccess SCP policy on the root and move the deny-list SCP policy from the root to the Production OU."
upvoted 1 times
...
atirado
8 months, 1 week ago
Selected Answer: C
Option A - This option actually rolls out AWS Config across the company which is exactly the opposite of what they are doing Option B - This option does not work because AWS Config will still be restricted despite the Allow SCP in Onboarding because of the Deny SCP in the root of the organization Option C - This option allows access to AWS Config in the new business unit and restricts access to everything else. However, the SCP will require regular updates to add new AWS services Option D - This option applies the correct level of access to each OU without needing updates: Onboarding gets access to AWS Config, Production does not and FullAWSAccess is established at the root after the company's Deny SCP is moved.
upvoted 1 times
...
cgsoft
8 months, 2 weeks ago
Selected Answer: D
SCP at root must be moved to Production OU to prevent it from being applied to onboarded account.
upvoted 1 times
...
ninomfr64
8 months, 3 weeks ago
Selected Answer: D
This was not easy for me due to wording, however here is my take: Not A. here we permanently remove SCPs that limit access to AWS Config, while we are requested to continue to enforce the current policies Not B. temporary OU and related SCP that allows AWS Config are nested under root where SCPs that limit access to AWS Config are applied. As SCP can only remove permission and not add, this will not work Not C. converting deny list into allow list here is not beneficial also temporarily apply SCP allowing AWS Config does not meet the request to avoid additional long-term maintenance. Thus D does the job.
upvoted 2 times
...
abeb
9 months ago
D is correct
upvoted 1 times
...
swadeey
9 months, 1 week ago
The Root is not an OU. It is a container for the management account and for all OUs and accounts in your organization. Conceptually, the Root contains all of the OUs. It cannot be deleted. You cannot govern enrolled accounts at the Root level within AWS Control Tower; instead, govern enrolled accounts within your OUs. SCPs don't apply at the root OU. This will impact production, because when you move the SCP from the root to Production you are changing the SCP for all OUs that are part of it. Will the customer allow changing the existing production SCP to onboard a new account? I don't think D is correct.
upvoted 1 times
...
jainparag1
9 months, 1 week ago
B is horribly wrong. Correct answer must be D.
upvoted 1 times
...
severlight
9 months, 2 weeks ago
Selected Answer: D
We need to get rid of the deny in the root SCP.
upvoted 2 times
...
Sandeep_B
10 months ago
Option D looks to be the correct answer. Can anyone confirm whether you got this question in the exam and cleared it?
upvoted 2 times
...
ansgohar
11 months ago
Selected Answer: D
D. An SCP at a lower level can't add a permission after it is blocked by an SCP
upvoted 2 times
...
dimitry_khan_arc
1 year ago
Selected Answer: D
I chose D. B is not correct because an explicit deny at the root will override any explicit allow in its child OUs, even if the allowance is given. Unless the onboarding account is kept under a parent with no explicit deny for the Config service, the onboarding account cannot configure it. So we need to move the explicit deny from the root to the Production OU and then keep the onboarding account under the root.
upvoted 2 times
...
autobahn
1 year ago
So which is the correct answer, B or D? Why does the portal say "B" even though many think it is D?
upvoted 3 times
...
technosavvy
1 year ago
Option D: This option would allow administrators to make changes to AWS Config rules for the new account, but it would also move the SCPs that limit access to other restricted services to the Production OU. This could create security risks for the other accounts in the organization.
upvoted 2 times
...
Karamen
1 year ago
The right answer is D
upvoted 2 times
...
autobahn
1 year ago
I'm thinking it is B because D says to move the organization's SCP to the Production OU. First of all, why is this extra step needed? After configuring the onboarding account, all that needs to happen is to move that account under the Production OU; the Production OU's SCP should stay as is. That's my opinion, so B seems the more straightforward solution.
upvoted 2 times
...
sebnzogang
1 year ago
Selected Answer: B
D: is not correct, because removing the root SCPs on the production OU means removing all the security rules on the services preventing changes, including changes to the AWS Config rules. and depending on the scenario this will be a security hole for production. Don't forget that the aim is to introduce the new AWS account into the Production OU with the same configurations and restrictions as the accounts that are already there. So thanks to the temporary OU on which we have an SCP that authorises actions on AWS Config, we just need to modify the configuration of the new account so that it matches the production requirements. Once the configuration requirements have been met, we move the new account into the production OU.
upvoted 5 times
victorHugo
12 months ago
" All accounts are members of the Production OU", therefore we don't need the SCP in root.
upvoted 3 times
...
...
chico2023
1 year, 1 month ago
Selected Answer: D
D is the only one that has: "Move the organization’s root SCP to the Production OU" An SCP at a lower level can't add a permission after it is blocked by an SCP at a higher level.
upvoted 2 times
...
NikkyDicky
1 year, 2 months ago
Selected Answer: D
it's D
upvoted 2 times
...
Jonalb
1 year, 2 months ago
Selected Answer: D
Explanation: By creating a temporary OU named Onboarding for the new account, the company can isolate the new account and make the necessary adjustments without affecting the existing accounts. Applying an SCP to the Onboarding OU that allows AWS Config actions will grant the administrators of the new business unit the required permissions to update existing AWS Config rules. Moving the organization's root SCP to the Production OU ensures that the existing policies and restrictions are still enforced for the rest of the accounts within the organization. Once the adjustments to AWS Config are complete and the new account is aligned with the company's policies, the new account can be moved to the Production OU, integrating it into the existing account structure and applying the same policies.
upvoted 2 times
...
bhanus
1 year, 2 months ago
The question NOWHERE talks about shared services VPC. Not sure if its missing here. D is the answer. A is also correct but its time taking as association of R53 zone for all the VPCs is time consuming. Imagine in future VPCs grow in number and you need to make sure R53 zone is associated with all VPCs which is time consuming. D makes it easy by associating to shared services VPC's once
upvoted 1 times
...
RunkieMax
1 year, 3 months ago
Selected Answer: D
The root SCP should be moved to Production to give the Onboarding OU time to make the changes before the security controls apply.
upvoted 1 times
...
Limlimwdwd
1 year, 3 months ago
Selected Answer: D
A root-level deny will supersede any allow in the OUs. The only workaround is to move it to the Production OU to keep the control measure.
upvoted 1 times
...
Anonymous9999
1 year, 4 months ago
Selected Answer: D
From https://us-west-2.console.aws.amazon.com/vpc/home?region=us-west-2#subnets: Any account has only those permissions permitted by every parent above it. If a permission is blocked at any level above the account, either implicitly (by not being included in an Allow policy statement) or explicitly (by being included in a Deny policy statement), a user or role in the affected account can't use that permission Thus it cannot be B
upvoted 1 times
...
mfsec
1 year, 5 months ago
Selected Answer: D
D is correct
upvoted 1 times
...
dev112233xx
1 year, 5 months ago
Selected Answer: D
D is the correct answer. Explicit Deny on root can’t be bypassed by just adding “allow” in the OU SCP
upvoted 2 times
...
kiran15789
1 year, 5 months ago
Selected Answer: D
"Enforce the current policies without introducing additional long-term maintenance" requires the organization SCP to move to the Production OU to avoid such issues in the future.
upvoted 1 times
...
dev112233xx
1 year, 5 months ago
Selected Answer: D
D is 100% the correct answer. An explicit deny in the root SCP can't be bypassed even with an explicit allow.
upvoted 1 times
...
Ajani
1 year, 5 months ago
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_strategies.html
upvoted 1 times
...
Ajani
1 year, 5 months ago
Please note the question constraint: "Which option will allow administrators to make changes and continue to enforce the current policies without introducing additional long-term maintenance?" Strategies for using SCPs: you can configure the service control policies (SCPs) in your organization to work as either of the following. A deny list: actions are allowed by default, and you specify what services and actions are prohibited. An allow list: actions are prohibited by default, and you specify what services and actions are allowed.
upvoted 1 times
...
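For illustration, here are skeleton policy documents for the two SCP strategies described above, written as Python dicts so they could be passed to boto3's create_policy. The specific services named are assumptions for the example only, not the company's actual policy.

import json

# Deny list: FullAWSAccess stays attached, so everything is allowed by default;
# this SCP carves out the restricted services (here, AWS Config as an example).
deny_list_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Deny", "Action": ["config:*"], "Resource": "*"}
    ],
}

# Allow list: FullAWSAccess is removed, so everything is implicitly denied;
# only the listed services (examples) remain usable.
allow_list_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": ["ec2:*", "s3:*"], "Resource": "*"}
    ],
}

print(json.dumps(deny_list_scp, indent=2))
print(json.dumps(allow_list_scp, indent=2))

In either style, an explicit Deny inherited from above always wins over an Allow lower in the hierarchy, which is the crux of the B-versus-D debate in this discussion.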
gameoflove
1 year, 5 months ago
Selected Answer: D
The SCP at the root level is the root cause of the new account not working, and answer D is the right fit for it.
upvoted 1 times
...
promartyr
1 year, 6 months ago
When they say "Move the organization’s root SCP to the Production OU" - where is it moving from? Isn't there only one OU?
upvoted 2 times
kamonegi
1 year, 6 months ago
from Onboarding OU to Production OU?
upvoted 1 times
...
Sarutobi
1 year, 4 months ago
From the root of the AWS Organization to the Production OU, that is one level below. So the Organization is the root, and Production and Onboarding OU are the branches.
upvoted 2 times
...
...
c73bf38
1 year, 6 months ago
The best option to allow administrators to make changes and continue to enforce the current policies without introducing additional long-term maintenance would be option D: D. Create a temporary OU named Onboarding for the new account. Apply an SCP to the Onboarding OU to allow AWS Config actions. Move the organization’s root SCP to the Production OU. Move the new account to the Production OU when adjustments to AWS Config are complete. This solution involves creating a temporary OU named Onboarding for the new account and applying an SCP to the Onboarding OU that allows AWS Config actions. The organization's root SCP should be moved to the Production OU, and the new account should be moved to the Production OU when the adjustments to AWS Config are complete. This approach allows the administrators of the new account to make changes to AWS Config rules while maintaining the current policies in the Production OU.
upvoted 1 times
...
Musk
1 year, 6 months ago
D makes sense, but there is something that does not: "Apply an SCP to the Onboarding OU to allow AWS Config actions." SCPs never grant permissions on their own. I think it makes D incorrect.
upvoted 2 times
...
Sarutobi
1 year, 6 months ago
Selected Answer: D
D is correct.
upvoted 2 times
...
skashanali
1 year, 7 months ago
The right answer is D. As permissions are inherited from the root, they have to remove the SCP from the root and apply it to the Production OU. Also, allow an SCP related to AWS Config for the onboarding temp OU and revert the changes afterwards.
upvoted 2 times
...
masetromain
1 year, 7 months ago
Yes, in option D, the solution is to create a temporary OU named Onboarding for the new account. By creating a new OU for the new account, it allows for a new set of permissions and policies to be applied to this account, separate from the existing Production OU. Once the new OU is created, an SCP is applied to it to allow AWS Config actions. This SCP allows the new account to make necessary adjustments to AWS Config without being blocked by the existing policies at the root level of the organization. Then, the root SCP that is blocking these actions is moved to the Production OU, where it will continue to block these actions for all other accounts that are members of the Production OU. Finally, once the necessary adjustments are made, the new account can be moved to the Production OU, where it will be subject to the existing policies and restrictions.
upvoted 1 times
masetromain
1 year, 7 months ago
This approach is the correct solution because it allows the new account to make necessary adjustments to AWS Config while still adhering to the company's policies, and it does not introduce additional long-term maintenance. The new account will be only in the new OU temporarily, and the SCP blocking AWS Config actions will only be in the root temporarily.
upvoted 1 times
...
...
nez15
1 year, 8 months ago
SAP-C01 question
upvoted 2 times
...
masetromain
1 year, 8 months ago
Selected Answer: D
The correct answer is D for me
upvoted 2 times
...
Question #4 Topic 1

A company is running a two-tier web-based application in an on-premises data center. The application layer consists of a single server running a stateful application. The application connects to a PostgreSQL database running on a separate server. The application’s user base is expected to grow significantly, so the company is migrating the application and database to AWS. The solution will use Amazon Aurora PostgreSQL, Amazon EC2 Auto Scaling, and Elastic Load Balancing.
Which solution will provide a consistent user experience that will allow the application and database tiers to scale?

  • A. Enable Aurora Auto Scaling for Aurora Replicas. Use a Network Load Balancer with the least outstanding requests routing algorithm and sticky sessions enabled.
  • B. Enable Aurora Auto Scaling for Aurora writers. Use an Application Load Balancer with the round robin routing algorithm and sticky sessions enabled.
  • C. Enable Aurora Auto Scaling for Aurora Replicas. Use an Application Load Balancer with the round robin routing and sticky sessions enabled.
  • D. Enable Aurora Scaling for Aurora writers. Use a Network Load Balancer with the least outstanding requests routing algorithm and sticky sessions enabled.
Reveal Solution Hide Solution

Correct Answer: C 🗳️

Community vote distribution
C (95%)
5%

robertohyena
Highly Voted 1 year, 8 months ago
C.
- Aurora writers is a distractor.
- Single-master mode only has read replicas (Aurora Replicas).
- Multi-master mode is not in the options.
- NLB does not support the round robin or least outstanding requests algorithms.
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Integrating.AutoScaling.html
upvoted 25 times
...
c73bf38
Highly Voted 1 year, 6 months ago
Selected Answer: C
The best solution to provide a consistent user experience that will allow the application and database tiers to scale would be option C: C. Enable Aurora Auto Scaling for Aurora Replicas. Use an Application Load Balancer with the round robin routing and sticky sessions enabled. This solution involves enabling Aurora Auto Scaling for Aurora Replicas to automatically add and remove read replicas to match the application's workload. The solution also uses an Application Load Balancer to distribute traffic to the application layer, with the round robin routing algorithm to balance the traffic evenly across multiple instances. Sticky sessions should be enabled to maintain session affinity for each user, allowing for a consistent user experience.
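As a rough sketch of what enabling this looks like (the 60% CPU target and the cooldowns are illustrative values, not from the question), Aurora Auto Scaling is driven by a target-tracking configuration registered with Application Auto Scaling against the cluster's rds:cluster:ReadReplicaCount dimension:

{
  "TargetValue": 60.0,
  "PredefinedMetricSpecification": {
    "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
  },
  "ScaleInCooldown": 300,
  "ScaleOutCooldown": 300
}

Aurora then adds or removes Aurora Replicas to keep the average reader CPU near the target.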
upvoted 15 times
...
Bereket
Most Recent 2 months, 2 weeks ago
Selected Answer: C
C, Enable Aurora Auto Scaling for Aurora Replicas
upvoted 1 times
...
gofavad926
5 months, 1 week ago
Selected Answer: C
C, Enable Aurora Auto Scaling for Aurora Replicas
upvoted 1 times
...
MoT0ne
5 months, 2 weeks ago
Selected Answer: C
Single writer: In an Aurora PostgreSQL DB cluster, there is only one writer instance at a time. All write operations, such as INSERT, UPDATE, and DELETE statements, are directed to the writer instance.
upvoted 4 times
...
GNB2024
6 months, 1 week ago
Selected Answer: C
It's C
upvoted 1 times
...
liux99
7 months, 4 weeks ago
B and D are distractors, as there is no writer auto scaling in Aurora. NLB does not support sticky sessions, so A is out. The answer is C.
upvoted 2 times
rhinozD
6 months, 1 week ago
NLB Sticky Session: https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html#sticky-sessions
upvoted 2 times
...
...
atirado
8 months, 1 week ago
Selected Answer: C
Option A - Allows the tiers to grow, but NLB does not make load balancing decisions that way.
Option B - No such thing as Aurora Auto Scaling for Aurora writers.
Option C - Allows the tiers to grow, and ALB with sticky sessions provides a consistent user experience.
Option D - No such thing as Aurora Auto Scaling for Aurora writers.
Note: The application is web-based, so choosing ALB shouldn't be an issue.
upvoted 3 times
...
ninomfr64
8 months, 3 weeks ago
Selected Answer: C
- Auto Scaling for Aurora writers does not exist (distractor).
- NLB does not support the least outstanding requests routing algorithm (it only supports flow hash).
- NLB does not let you enable sticky sessions; stickiness is effectively always on with flow hash, where each TCP/UDP connection is routed to a single target for the life of the connection.
Thus C is correct.
upvoted 2 times
...
abeb
9 months ago
C Auto Scaling for Aurora Replicas. Use an Application Load Balancer with the round robin routing and sticky session
upvoted 1 times
...
severlight
9 months, 2 weeks ago
Selected Answer: C
Aurora - AS only for read replicas. NLB doesn't support the least outstanding requests or round-robin algorithms, only flow hash is supported.
upvoted 1 times
...
ansgohar
11 months ago
Selected Answer: C
C. Enable Aurora Auto Scaling for Aurora Replicas. Use an Application Load Balancer with the round robin routing and sticky sessions enabled.
upvoted 1 times
...
rsn
11 months, 3 weeks ago
Selected Answer: A
NLB scales better than ALB. Also, the least outstanding requests algorithm works better than the round robin algorithm. Any thoughts?
upvoted 2 times
Ganshank
11 months, 3 weeks ago
The correct answer is whatever the examiner says it is. Depending on how you look at it either A or C can be the correct answer. NLB scales better and supports LOR algorithm which are both factors in its favor, however stickiness is not supported for TLS connections in NLBs. While this has not been called out explicitly, I doubt anyone in today's world would support non-TLS connections to their applications. If that turns out to be a dealbreaker, then the only option is C, to use ALB, however round-robin doesn't guarantee the best performance especially where stickiness is concerned. Your call.
upvoted 3 times
...
...
dimitry_khan_arc
1 year ago
Selected Answer: C
write replica is distractor. NLB does not support round robin
upvoted 2 times
...
NikkyDicky
1 year, 2 months ago
Selected Answer: C
it's C
upvoted 1 times
...
ptpho
1 year, 2 months ago
It's C. No idea about NLB. Aurora scaling -> Auto Scaling for Aurora Replicas (the writer is just the primary).
upvoted 1 times
...
Limlimwdwd
1 year, 3 months ago
Selected Answer: C
Aurora Replicas and ALB will meet the purpose
upvoted 1 times
...
EthicalBond
1 year, 4 months ago
Selected Answer: C
Read Replicas ALB with sticky sessions (due to stateful application)
upvoted 2 times
...
mfsec
1 year, 5 months ago
Selected Answer: C
Aurora replicas + ALB
upvoted 1 times
...
kiran15789
1 year, 5 months ago
Selected Answer: C
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html#Aurora.Replication.Replicas
upvoted 1 times
...
gameoflove
1 year, 5 months ago
C. https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html#Aurora.Replication.Replicas
upvoted 1 times
...
masetromain
1 year, 7 months ago
Selected Answer: C
C is correct. This solution will provide a consistent user experience by using an Application Load Balancer with the round robin routing algorithm and sticky sessions enabled. This allows the application and database tiers to scale by using Aurora Auto Scaling for Aurora Replicas. This will ensure that the application is able to handle the increased user base while maintaining a consistent user experience. The use of an Application Load Balancer also allows for better routing of traffic to the available Aurora Replicas.
upvoted 2 times
...
ThaiNT
1 year, 8 months ago
Using Amazon Aurora Auto Scaling with Aurora replicas https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Integrating.AutoScaling.html
upvoted 2 times
...
masssa
1 year, 8 months ago
C is correct
upvoted 2 times
...
Arun_Bala
1 year, 8 months ago
Selected Answer: C
Correct ans is c
upvoted 2 times
...
nez15
1 year, 8 months ago
SAP-C01 Question. https://www.examtopics.com/discussions/amazon/view/36075-exam-aws-certified-solutions-architect-professional-topic-1/
upvoted 1 times
...
Question #5 Topic 1

A company uses a service to collect metadata from applications that the company hosts on premises. Consumer devices such as TVs and internet radios access the applications. Many older devices do not support certain HTTP headers and exhibit errors when these headers are present in responses. The company has configured an on-premises load balancer to remove the unsupported headers from responses sent to older devices, which the company identified by the User-Agent headers.
The company wants to migrate the service to AWS, adopt serverless technologies, and retain the ability to support the older devices. The company has already migrated the applications into a set of AWS Lambda functions.
Which solution will meet these requirements?

  • A. Create an Amazon CloudFront distribution for the metadata service. Create an Application Load Balancer (ALB). Configure the CloudFront distribution to forward requests to the ALB. Configure the ALB to invoke the correct Lambda function for each type of request. Create a CloudFront function to remove the problematic headers based on the value of the User-Agent header.
  • B. Create an Amazon API Gateway REST API for the metadata service. Configure API Gateway to invoke the correct Lambda function for each type of request. Modify the default gateway responses to remove the problematic headers based on the value of the User-Agent header.
  • C. Create an Amazon API Gateway HTTP API for the metadata service. Configure API Gateway to invoke the correct Lambda function for each type of request. Create a response mapping template to remove the problematic headers based on the value of the User-Agent. Associate the response data mapping with the HTTP API.
  • D. Create an Amazon CloudFront distribution for the metadata service. Create an Application Load Balancer (ALB). Configure the CloudFront distribution to forward requests to the ALB. Configure the ALB to invoke the correct Lambda function for each type of request. Create a Lambda@Edge function that will remove the problematic headers in response to viewer requests based on the value of the User-Agent header.
Reveal Solution Hide Solution

Correct Answer: B 🗳️

Community vote distribution
A (35%)
D (29%)
B (19%)
Other

EricZhang
Highly Voted 1 year, 8 months ago
A. The only difference between A and D is CloudFront function vs. Lambda@Edge. In this case the CloudFront function can remove the response header based on the request header, and it is much faster and more lightweight.
upvoted 60 times
vn_thanhtung
1 year ago
After reading, answer A's "Create a CloudFront function to remove the problematic headers based on the value of the User-Agent header" is not really clear and is fuzzy. "The company has configured an on-premises load balancer to remove the unsupported headers from responses sent to older devices" => "Create a Lambda@Edge function that will remove the problematic headers in response to viewer requests based on the value of the User-Agent header" => D makes sense.
upvoted 10 times
...
...
masetromain
Highly Voted 1 year, 8 months ago
I think this is answer D: Lambda@Edge can modify headers https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-examples.html
upvoted 26 times
vn_thanhtung
1 year ago
Agree D
upvoted 5 times
...
ninomfr64
8 months, 3 weeks ago
Agree on D, but also CloudFront Function can manipulate headers https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cloudfront-functions.html#:~:text=cache%20hit%20ratio.-,Header%20manipulation,-%E2%80%93%20You%20can%20insert
upvoted 2 times
...
...
MAZIADI
Most Recent 2 weeks, 2 days ago
Selected Answer: A
Header manipulation --> CloudFront Function. Can we delete headers within an AWS API Gateway default gateway response? ChatGPT: No, you cannot directly delete headers within AWS API Gateway's default gateway responses. However, you can modify or override them. API Gateway provides a way to customize the default responses, including the headers, by defining custom gateway responses.
upvoted 1 times
...
niroop893
3 weeks, 2 days ago
Agree D
upvoted 1 times
...
ukivanlamlpi
1 month, 3 weeks ago
Selected Answer: C
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-override-request-response-parameters.html Not A or D, because ALB is not serverless. Not B, because a mapping template is needed and you cannot change the default gateway response.
upvoted 2 times
...
vip2
2 months ago
Selected Answer: C
After reviewing the details of the question, the correct answer is C. Main points:
1. Serverless: API Gateway with an HTTP API is serverless, while ALB is not.
2. CloudFront Functions are only for the viewer request/response, not for the origin request/response, so A is not correct. See https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/edge-functions-choosing.html
3. B is also not correct because the default gateway response is generated by API Gateway, but here we need to modify the response from the origin.
upvoted 1 times
...
Helpnosense
2 months, 1 week ago
Selected Answer: B
Answers A and D are wrong. The question requires a serverless solution, and A and D introduce ALB, which is AWS-managed but not serverless. Serverless services are managed, but a managed service is not necessarily serverless: once an ALB is created, AWS charges for it whether there is traffic or not. A serverless service, Lambda for example, is charged per invocation; if it is not called, there is no charge. Answer C is wrong because custom templates are not supported by the HTTP API. So the correct answer is B.
upvoted 1 times
...
85b5b55
2 months, 2 weeks ago
To manipulate HTTP headers, the CloudFront function is the right choice. It is lightweight and short-running. So, Ans: A.
upvoted 3 times
...
ahhatem
2 months, 3 weeks ago
Selected Answer: A
B is wrong because default responses are irrelevant. C is wrong because you can't conditionally remove headers based on the User-Agent in parameter mapping. D can work but is more expensive, slower, etc. CloudFront Functions are designed for this use case specifically!
upvoted 3 times
...
iulian0585
3 months ago
Selected Answer: A
A CloudFront function can remove headers closer to the customer than Lambda@Edge, quicker and cheaper.
upvoted 1 times
...
nkv_3762
3 months ago
Selected Answer: A
CloudFront Function: Header manipulation – You can insert, modify, or delete HTTP headers in the request or response.
upvoted 1 times
...
higashikumi
3 months, 1 week ago
Selected Answer: A
CloudFront Functions are the optimal solution for this scenario as they are designed for lightweight tasks like header manipulation. In this case, a CloudFront Function can be easily configured to inspect the User-Agent header in incoming requests and conditionally remove unsupported headers before forwarding the request to the origin (the Application Load Balancer and Lambda functions). This approach simplifies the architecture, potentially reduces costs compared to using Lambda@Edge, and leverages the native integration between CloudFront and CloudFront Functions for efficient header modification at the edge.
upvoted 1 times
...
Zas1
3 months, 2 weeks ago
Selected Answer: A
A https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/edge-functions-choosing.html
upvoted 1 times
...
HishamShaikha
3 months, 3 weeks ago
Selected Answer: A
As per the AWS docs, https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/edge-functions-choosing.html, the answer is A. No need for Lambda@Edge because we want to modify the headers only.
upvoted 1 times
...
Mikep12357
3 months, 3 weeks ago
Option B. Example: https://www.pulumi.com/ai/answers/1ES2CGFk1Jd9MP1LzPFXkN/customizing-aws-api-gateway-headers
upvoted 1 times
...
Fu7ed
3 months, 4 weeks ago
I'll take A as the correct answer. Requirements: use AWS serverless features, maintain support for the older devices, remove headers. Serverless: Lambda. Older devices: ALB. Remove headers: CloudFront. https://aws.amazon.com/ko/about-aws/whats-new/2023/01/amazon-cloudfront-supports-removal-response-headers/
upvoted 1 times
...
mifune
4 months, 1 week ago
Selected Answer: B
If the question asks for serverless with Lambda, my opinion is that API Gateway has to be involved. Then, the response of this service is configured according to the "problematic headers". As simple as that for me.
upvoted 1 times
...
7f6aef3
4 months, 2 weeks ago
Selected Answer: D
The best option would be to use Lambda@Edge instead of CloudFront Functions. This is because Lambda@Edge offers more flexibility and power for manipulating requests and responses at the edge of the CloudFront network.
upvoted 2 times
...
Weninka
4 months, 3 weeks ago
Selected Answer: D
CloudFront Functions can't be triggered to run on the response from the origin (in this case, to modify the response returned by the Lambda functions), so it looks like it's D. https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/edge-functions-choosing.html
upvoted 2 times
...
4555894
5 months ago
Selected Answer: D
The other options have drawbacks:
A. CloudFront function is not available: CloudFront functions are not a supported feature.
B & C. API Gateway modification: these options require modifying responses at the API Gateway level. While achievable, they wouldn't process requests based on User-Agent headers before reaching the origin, potentially causing errors on older devices.
By utilizing Lambda@Edge, the company can:
- maintain a serverless architecture with Lambda functions for core logic
- filter out unsupported headers close to the user, preventing errors on older devices
- leverage CloudFront's caching and edge locations for improved performance
upvoted 1 times
sse69
3 months, 2 weeks ago
"CloudFront function is not available: CloudFront functions are not a supported feature." Uhm no, CloudFront functions do exist, here's a comparison with Lambda@Edge: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/edge-functions-choosing.html
upvoted 1 times
...
...
gofavad926
5 months, 1 week ago
Selected Answer: A
A, you can do it with cloudfront functions or lambda@edge, but cloudfront functions is faster and cheaper...
upvoted 1 times
...
Dgix
5 months, 1 week ago
Selected Answer: A
On second thought, A.
upvoted 1 times
...
Dgix
5 months, 1 week ago
Selected Answer: D
CloudFront functions are not for this type of use case. Therefore, D.
upvoted 2 times
...
MoT0ne
5 months, 2 weeks ago
Selected Answer: B
In AWS API Gateway, a response mapping template is used to transform the output received from the backend integration into a format that is suitable for the API client. It allows you to customize the structure and content of the response before it is sent back to the client.
upvoted 1 times
...
tushar321
5 months, 2 weeks ago
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/edge-functions-choosing.html#:~:text=CloudFront%20Functions%20is,to%20every%20request.
upvoted 1 times
...
titi_r
5 months, 3 weeks ago
Selected Answer: A
Answer is A. CloudFront Functions is ideal for lightweight, short-running functions for use cases like the following: Header manipulation – You can insert, modify, or delete HTTP headers in the request or response. For example, you can add a True-Client-IP header to every request. https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/edge-functions-choosing.html
upvoted 1 times
...
24Gel
5 months, 4 weeks ago
For A and D, I don't see any point in using an ALB here; the purpose is to remove headers, not to increase performance.
upvoted 1 times
...
awsylum
6 months ago
The answer could be B or C. The main thing is the solution should be serverless, so that rules out ALB and CloudFront even though both integrate with serverless components; using either of them doesn't make it an entirely serverless architecture. The issue I have with the selected answer is that parameter mapping can be utilized in API Gateway to remove specific headers with HTTP APIs. I believe REST APIs are a superset that is more powerful, but if HTTP APIs can do the same job cheaper, then why not use HTTP APIs and select C? See this documentation for confirmation: https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-parameter-mapping.html
upvoted 4 times
awsylum
6 months ago
Actually, the more I read about it, the more it seems that the parameter mapping is only for HTTP API. That would make C the answer. These questions and answers make things even more confusing because you're not sure whether to trust the selected answers here.
upvoted 3 times
24Gel
5 months, 4 weeks ago
Agree, that is my choice too
upvoted 1 times
...
...
...
rhinozD
6 months, 1 week ago
Selected Answer: C
You guys are discussing CloudFront function and Lambda@Edge but the question says: "adopt serverless technologies". ALB is not a serverless service. I think C is the correct answer.
upvoted 6 times
...
GNB2024
6 months, 1 week ago
Selected Answer: A
I agree with A
upvoted 1 times
...
Rajarshi
6 months, 3 weeks ago
Ans D, because CloudFront Functions cannot modify headers
upvoted 1 times
hogtrough
6 months, 2 weeks ago
"CloudFront can remove headers that it received from the origin, or add headers to the response, before sending the response to viewers." https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/modifying-response-headers.html
upvoted 1 times
...
rhinozD
6 months, 1 week ago
Actually, It can https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/edge-functions-choosing.html
upvoted 2 times
...
...
SwapnilAWS
6 months, 3 weeks ago
option A : https://dev.to/aws-heroes/cloudfront-functions-vs-lambdaedge-whats-the-difference-1g60#:~:text=Scale%3A%20CloudFront%20Functions%20can%20scale,128MB%20%2D%203GB%20of%20memory%20available.
upvoted 1 times
...
master9
7 months, 1 week ago
Selected Answer: A
To migrate the service to AWS, adopt serverless technologies, and retain the ability to support the older devices, the company can use AWS Application Load Balancer (ALB). The ALB is a serverless technology that can be used to route incoming traffic to serverless functions such as AWS Lambda. The Serverless Framework makes it possible to set up the connection between Application Load Balancers and Lambda functions with the help of the alb event. To support the older devices, the company can configure the ALB to remove the unsupported headers from responses sent to older devices, which the company identified by the User-Agent headers. The ALB’s focus on HTTP allows it to use parts of the protocol to make decisions about caching and save you some Lambda executions.
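For what it's worth, here is a minimal sketch of the ALB-to-Lambda integration mentioned above: a Node.js handler registered as a target of an ALB target group of type "lambda", returning the response shape an ALB expects. The echoed field is just a placeholder to show that the User-Agent is available in the event:

// Node.js Lambda handler invoked directly by an ALB target group of type "lambda".
exports.handler = async (event) => {
    // event.headers carries the request headers, including user-agent.
    const ua = (event.headers && event.headers['user-agent']) || '';
    return {
        statusCode: 200,
        statusDescription: '200 OK',
        isBase64Encoded: false,
        headers: { 'content-type': 'application/json' },
        body: JSON.stringify({ userAgent: ua }), // placeholder payload
    };
};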
upvoted 1 times
...
ftaws
7 months, 2 weeks ago
Selected Answer: A
See this link: https://aws.amazon.com/ko/blogs/korea/introducing-cloudfront-functions-run-your-code-at-the-edge-with-low-latency-at-any-scale/ CloudFront Functions are cheaper and faster.
upvoted 1 times
...
buriz
7 months, 3 weeks ago
Selected Answer: D
It's option D because ChatGPT says so
upvoted 4 times
...
liux99
7 months, 4 weeks ago
I mean B is the correct answer.
upvoted 2 times
...
liux99
7 months, 4 weeks ago
A and D are not correct because an ALB needs to go through API Gateway to invoke Lambda. API Gateway is the right place to remove the unsupported headers for old devices, so C is the right answer.
upvoted 1 times
rhinozD
6 months, 1 week ago
Wrong, ALB can invoke lambda directly. https://docs.aws.amazon.com/elasticloadbalancing/latest/application/lambda-functions.html
upvoted 1 times
...
...
blackgamer
8 months, 2 weeks ago
The answer is A. You can refer to AWS documentation here - https://aws.amazon.com/blogs/aws/introducing-cloudfront-functions-run-your-code-at-the-edge-with-low-latency-at-any-scale/
upvoted 4 times
...
ayadmawla
8 months, 2 weeks ago
Selected Answer: A
Yes you can do it with Lambda@Edge but CloudFront Function can do it quicker and cheaper. https://dev.to/aws-heroes/cloudfront-functions-vs-lambdaedge-whats-the-difference-1g60#:~:text=Lambda%40Edge%20functions%20have%20128MB,request%20and%20origin%20response%20triggers).
upvoted 2 times
...
kaby1987
8 months, 2 weeks ago
Selected Answer: D
Ans is D
upvoted 1 times
...
ninomfr64
8 months, 3 weeks ago
Selected Answer: D
This is really challenging for me. Here is my reasoning: i) the User-Agent header is stored in the request, not in the response; ii) based on i), we need a mechanism to map a session id to a User-Agent in requests and access this mapping when processing responses. Not A, as CloudFront Functions do not interact with other AWS services; they can use key value pairs, but in read-only mode. https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/functions-tutorial-kvs.html Not B, as gateway responses only work with the "supported response types" listed here: https://docs.aws.amazon.com/apigateway/latest/developerguide/supported-gateway-response-types.html (please note the question mentions errors, but they occur on devices). Not C, as response mapping templates do not interact with other AWS services. D is correct, as Lambda@Edge can access other AWS services (e.g., in this case a DynamoDB table for the session-id-to-User-Agent mapping).
upvoted 6 times
...
AC1984
8 months, 3 weeks ago
Selected Answer: D
Lambda@Edge can edit header
upvoted 1 times
ninomfr64
8 months, 3 weeks ago
Agree on D, but also CloudFront Function can manipulate headers https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cloudfront-functions.html#:~:text=cache%20hit%20ratio.-,Header%20manipulation,-%E2%80%93%20You%20can%20insert
upvoted 1 times
...
...
atirado
9 months ago
Selected Answer: C
I think there are two problems in this situation: 1- "Many older devices do not support certain HTTP headers and exhibit errors when these headers are present in responses." 2- "The company has already migrated the applications into a set of AWS Lambda functions" Those two problems are addressed by using an API Gateway which 'selects the appropriate function for each type of request' and a mapping template which 'removes the unsupported headers'.
upvoted 3 times
...
abeb
9 months ago
A Create an Amazon CloudFront distribution
upvoted 1 times
...
KevinYao
9 months ago
Selected Answer: B
ALB is not serverless, and an HTTP API can't remove a header.
upvoted 4 times
...
edder
9 months, 1 week ago
Selected Answer: B
The answer is B. C: obviously wrong. A, D: from the first half of the question, this cannot be considered a globally deployed service, so CloudFront is not necessary. B is correct because it also meets the serverless requirements.
upvoted 4 times
...
BECAUSE
9 months, 1 week ago
Selected Answer: D
D is the answer, Lambda@Edge in option D appears to offer more targeted and suitable functionality for header modification at the edge
upvoted 1 times
...
severlight
9 months, 2 weeks ago
Selected Answer: B
Looks like B. ALB is not serverless, and HTTP API won't let you remove a header based on conditional logic, hence not C. https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-override-request-response-parameters.html#:~:text=To%20conditionally%20remap%20a%20parameter%20based%20on%20its%20contents%20or%20the%20contents%20of%20some%20other%20parameter
upvoted 2 times
...
Pupu86
9 months, 4 weeks ago
CloudFront Functions run at edge locations and are meant for functions that don't require body access, with a very short execution time of at most 1 ms. Lambda@Edge can only be deployed at regional edge caches, with up to 5 seconds of execution time, and is usually implemented when interaction with the request body is needed. So the answer should be A.
upvoted 2 times
...
totten
10 months ago
Selected Answer: D
It's confusing because CloudFront Function and Lambda@Edge can modify responses based on request headers. Here is what CloudFront function documentation says: "Header manipulation – You can insert, modify, or delete HTTP headers in the request or response. For example, you can add a True-Client-IP header to every request." (https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cloudfront-functions.html). Here is what Lambda@Edge documentation says: "CloudFront can return different objects to viewers based on the device they're using by checking the User-Agent header, which includes information about the devices." (https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-at-the-edge.html) I would vote for D just because Lambda@Edge had started supporting this use case a few years before CloudFront function appeared.
upvoted 5 times
...
Chainshark
10 months, 3 weeks ago
"Serverless technologies feature automatic scaling, built-in high availability, and a pay-for-use billing model to increase agility and optimize costs." https://aws.amazon.com/serverless/ AWS ALB is not pay as you go, so it is not serverless. Thus A and D are wrong. Thus, answer C is correct (C. https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-parameter-mapping.html)
upvoted 1 times
...
rlf
10 months, 3 weeks ago
C. https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-parameter-mapping.html
upvoted 1 times
...
devourer66
11 months ago
For options A and D, can someone explain how lambda or CF function that should be processing a response can get information about the headers in the original request to actually decide if the transformation is required?
upvoted 2 times
ohcn
10 months, 3 weeks ago
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cloudfront-functions.html
upvoted 1 times
...
...
ansgohar
11 months ago
Selected Answer: A
A. Create an Amazon CloudFront distribution for the metadata service. Create an Application Load Balancer (ALB). Configure the CloudFront distribution to forward requests to the ALB. Configure the ALB to invoke the correct Lambda function for each type of request. Create a CloudFront function to remove the problematic headers based on the value of the User-Agent header.
upvoted 2 times
...
vestersly
11 months, 2 weeks ago
Selected Answer: B
The answer is certainly B. We must consider the serverless requirement in the new design.
upvoted 5 times
...
cheese929
11 months, 3 weeks ago
Selected Answer: A
Both CloudFront function and Lambda@Edge can do the job. but CloudFront function can do it at approximately 1/6th the price of Lambda@Edge. Thus I go for A.
upvoted 1 times
...
Melampos
11 months, 3 weeks ago
Selected Answer: B
The ALB in the answer cannot fit the requirement (serverless).
upvoted 2 times
Chainshark
10 months, 3 weeks ago
It can, Lambda can be a target of ALBs.
upvoted 2 times
...
...
awsent
11 months, 3 weeks ago
Answer: D. Because the service is used by devices, use CloudFront. API Gateway is a regional service, and requests could experience latency. CloudFront Functions are executed before the request reaches the edge cache; this scenario requires changes to the response header, hence Lambda@Edge.
upvoted 2 times
BeanDev
10 months ago
Dude, CF func can also be executed to modify the Viewer responses, not just requests ;)
upvoted 1 times
...
...
dimitry_khan_arc
1 year ago
Selected Answer: B
Cloudfront could have been a choice but as soon as it talks about ALB the requirement to keep serverless is compromised. So, B is the answer.
upvoted 1 times
...
autobahn
1 year ago
It has to be B since ALB is not a serverless service. The company prefers a serverless architecture. Also, the requirement doesn't talk about Caching or reducing Latency. So, A & D cannot be the right choice.
upvoted 1 times
...
chico2023
1 year ago
Selected Answer: A
Interestingly, the link shared by Karamen (https://aws.amazon.com/blogs/aws/introducing-cloudfront-functions-run-your-code-at-the-edge-with-low-latency-at-any-scale/) points to answer A. CloudFront functions include: "HTTP header manipulation: View, add, modify, or delete any of the request/response headers." This helps giving weight to answer A as the correct one.
upvoted 2 times
...
Karamen
1 year, 1 month ago
the correct answer is D. https://aws.amazon.com/blogs/aws/introducing-cloudfront-functions-run-your-code-at-the-edge-with-low-latency-at-any-scale/
upvoted 1 times
RotterDam
1 year ago
Why not (A)? They both can do the same thing - only CF Functions can do it much earlier at the edge location, much closer to the user, BEFORE it hits the regional cache (Lambda@Edge works at the regional cache)
upvoted 2 times
...
...
Russs99
1 year, 1 month ago
Selected Answer: D
As to A, Amazon CloudFront does not provide built-in capabilities to directly remove or modify HTTP headers
upvoted 2 times
RotterDam
1 year ago
Says who? CF functions do this very well as announced in their Blogs The second category of use cases are simple HTTP(s) request/response manipulations that can be executed by very short-lived functions. For these use cases, you need a flexible programming experience with the performance, scale, and cost-effectiveness that enable you to execute them on every request. To help you with this second category of use cases, I am happy to announce the availability of CloudFront Functions, a new serverless scripting platform that allows you to run lightweight JavaScript code at the 218+ CloudFront edge locations at approximately 1/6th the price of Lambda@Edge. https://aws.amazon.com/blogs/aws/introducing-cloudfront-functions-run-your-code-at-the-edge-with-low-latency-at-any-scale/
upvoted 2 times
...
...
allen_devops
1 year, 1 month ago
Option A is correct. For options B and C, they don't support mapping based on headers; mapping only concerns the payload, context, and stage. For option D, the function is associated with the viewer request; it should be the viewer response.
upvoted 1 times
allen_devops
1 year, 1 month ago
To correct myself, data mapping is only available for REST APIs, not HTTP APIs, so C is wrong. For option B, the default gateway response is used to respond with an error.
upvoted 1 times
...
...
Jonalb
1 year, 1 month ago
Selected Answer: A
https://trackit.io/cloudfront-functions-vs-lambdaedge-which-one-should-you-choose/
upvoted 3 times
...
Magoose
1 year, 1 month ago
Selected Answer: D
Option A is incorrect because using a CloudFront function to remove headers is not possible. CloudFront functions do not have the capability to modify headers in response to viewer requests.
upvoted 2 times
...
davidcc8g
1 year, 1 month ago
Just wondering if the real exam will have such a case: the official answer is wrong, but we selected the correct one?
upvoted 1 times
RotterDam
1 year ago
I believe this is an actual exam question mate...
upvoted 1 times
...
...
Mom305
1 year, 1 month ago
A. You could configure Lambda@Edge, but now you can also set headers with CloudFront Functions
upvoted 1 times
...
Jonalb
1 year, 2 months ago
Selected Answer: A
A. Create an Amazon CloudFront distribution for the metadata service. Create an Application Load Balancer (ALB). Configure the CloudFront distribution to forward requests to the ALB. Configure the ALB to invoke the correct Lambda function for each type of request. Create a CloudFront function to remove the problematic headers based on the value of the User-Agent header
upvoted 1 times
...
NikkyDicky
1 year, 2 months ago
Selected Answer: D
It's D. CF functions can only modify client req/resp, not the origin, which is required here
upvoted 3 times
...
javitech83
1 year, 2 months ago
Selected Answer: A
I go with A; CloudFront has the ability to remove/modify headers
upvoted 1 times
...
Jonalb
1 year, 2 months ago
Selected Answer: D
In this solution, you use CloudFront, ALB, Lambda@Edge, and Lambda functions to achieve the desired outcome. Create an Amazon CloudFront distribution: CloudFront acts as a content delivery network (CDN) and allows you to distribute your metadata service globally. You can configure it to handle incoming requests and route them to the appropriate backend. Create an Application Load Balancer (ALB): ALB is used as a target for CloudFront to forward requests. It provides advanced routing capabilities and can invoke the correct Lambda function based on the type of request. Create a Lambda@Edge function: Lambda@Edge allows you to run Lambda functions at the CloudFront edge locations, closer to your users. Create a Lambda@Edge function that examines the User-Agent header of incoming requests and removes the problematic headers from the response when necessary. This ensures compatibility with older devices.
upvoted 2 times
...
Jesuisleon
1 year, 2 months ago
Selected Answer: B
I don't understand why you guys choose CloudFront. The data stream flows from consumer devices to AWS; why do we need CloudFront to cache content from an ALB? I choose B, as Amazon API Gateway HTTP APIs do not support mapping templates.
upvoted 5 times
CloudHandsOn
1 year ago
The question also states SERVERLESS. This is a strong indicator to go with SVLS technologies if possible. ALB, EC2, etc. are not SVLS
upvoted 1 times
...
...
easytoo
1 year, 2 months ago
D
upvoted 1 times
...
ailves
1 year, 2 months ago
Selected Answer: B
We don't need ALB to invoke the correct Lambda function for each type of request.
upvoted 1 times
ailves
1 year, 2 months ago
I'm wrong. The right answer is D
upvoted 1 times
...
...
Roontha
1 year, 2 months ago
Answer: A. Why do we need Lambda@Edge when a CloudFront function serves the purpose? CF Functions are also much faster and more lightweight.
upvoted 2 times
...
sghdfghdfghdghdfh4w56346he346h
1 year, 3 months ago
Selected Answer: A
A. Only the viewer response (response header) needs to be modified. CloudFront function and Lambda functions can both do this, but CloudFront functions are better suited for lighter functions at the edge such as this.
upvoted 4 times
...
rtguru
1 year, 3 months ago
I go with A; CloudFront has the ability to remove/modify headers
upvoted 1 times
...
dev112233xx
1 year, 3 months ago
Selected Answer: C
C is the correct answer... HTTP API is cheaper and faster than REST Api and in this case i think it does the job. we don't really need a CF or "Lambda@Edge" for such scenario!!!
upvoted 3 times
dev112233xx
1 year, 3 months ago
seems like HTTP API doesn't support mapping template, so i change my answer to B (REST Api)
upvoted 1 times
...
...
toshayxlol
1 year, 3 months ago
Selected Answer: A
A. Both Lambda@Edge and CloudFront Functions can modify headers, but for this particular case CloudFront Functions are the most suitable choice because of their more simplistic approach compared to Lambda@Edge, which fits better for more complex tasks. Also, CloudFront Functions run at edge locations instead of regional edge locations like Lambda@Edge, so you execute code even closer to the user. https://medium.com/trackit/cloudfront-functions-vs-lambda-edge-which-one-should-you-choose-c88527647695
upvoted 3 times
...
tito0207
1 year, 3 months ago
chatgpt answer D
upvoted 3 times
...
y0eri
1 year, 3 months ago
Selected Answer: A
Seems to be possible with a CloudFront Function, and there is no need for Lambda@Edge functionality such as network access. See https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/writing-function-code.html#function-code-modify-response where you see the response can be changed based on the request.
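A minimal CloudFront Function sketch of that pattern, assuming a viewer-response trigger; the User-Agent marker and the header being stripped are hypothetical placeholders:

// CloudFront Function on the viewer response event.
// event.request is available alongside event.response, so the
// response can be modified based on the request's User-Agent.
function handler(event) {
    var request = event.request;
    var response = event.response;
    var ua = request.headers['user-agent'] ? request.headers['user-agent'].value : '';
    if (ua.indexOf('LegacyRadio') !== -1) {              // hypothetical device marker
        delete response.headers['x-problematic-header']; // hypothetical header name
    }
    return response;
}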
upvoted 2 times
...
AMEJack
1 year, 3 months ago
Selected Answer: D
A trick here: the question says "The company has configured an on-premises load balancer to remove the unsupported headers from responses sent to older devices, which the company identified by the User-Agent headers." This means it needs to change the response from the origin, which can only be done by Lambda@Edge.
upvoted 5 times
chikorita
1 year, 3 months ago
my exact thoughts....but still confused
upvoted 1 times
...
...
zijieli
1 year, 4 months ago
B is the answer. Why? The question describes an application that devices access in near real time, so CloudFront isn't the right choice; CloudFront is mainly used for edge caching. Features like CF Functions and Lambda@Edge just provide more flexibility for users. CloudFront isn't the typical fit for device metadata collection, because it requires nearly real-time queries.
upvoted 5 times
...
EthicalBond
1 year, 4 months ago
Selected Answer: A
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cloudfront-functions.html Header manipulation – You can insert, modify, or delete HTTP headers in the request or response.
upvoted 2 times
...
OCHT
1 year, 4 months ago
Selected Answer: C
Option A involves creating an Amazon CloudFront distribution and an Application Load Balancer (ALB) to forward requests to the ALB and invoke the correct Lambda function for each type of request. However, it suggests using a CloudFront function to remove the problematic headers based on the value of the User-Agent header. CloudFront functions are used to manipulate the request and response that are generated by CloudFront. They are not designed to manipulate the headers of responses that are generated by other AWS services such as ALB or Lambda. In contrast, option C suggests using Amazon API Gateway and its built-in functionality to manipulate response headers using a response mapping template. This is a more straightforward and efficient way to remove the problematic headers from responses sent to older devices.
upvoted 8 times
...
2aldous
1 year, 4 months ago
A. "To give you the performance and scale that modern applications require, CloudFront Functions uses a new process-based isolation model instead of virtual machine (VM)-based isolation as used by AWS Lambda and Lambda@Edge. To do that, we had to enforce some restrictions, such as avoiding network and file system access. Also, functions run for less than one millisecond. In this way, they can handle millions of requests per second while giving you great performance on every function execution. Functions add almost no perceptible impact to overall content delivery network (CDN) performance." Also, CF Functions can View, add, modify, or delete any of the request/response headers.
upvoted 1 times
...
Don2021
1 year, 4 months ago
Answer is A.
upvoted 1 times
...
OCHT
1 year, 4 months ago
Selected Answer: D
It's too heavy a workload for CloudFront Functions, which aren't meant for much more than header analysis.
upvoted 2 times
...
dbacks5439
1 year, 4 months ago
Selected Answer: B
It has to be B due to the application moving from on-prem servers to Lambda functions. You don't need load balancers for Lambdas, hence the CF answers are wrong because they reference load balancers. You have to use an API gateway for REST.
upvoted 5 times
...
Jacky_exam
1 year, 4 months ago
Selected Answer: D
GPT: The solution that will meet these requirements is D. Create an Amazon CloudFront distribution for the metadata service. Create an Application Load Balancer (ALB). Configure the CloudFront distribution to forward requests to the ALB. Configure the ALB to invoke the correct Lambda function for each type of request. Create a Lambda@Edge function that will remove the problematic headers in response to viewer requests based on the value of the User-Agent header. Explanation: Option A is not correct because CloudFront functions can only be used to modify the request, not the response. Also, the question asks to remove headers from the response, not from the request.
upvoted 3 times
scuzzy2010
1 year, 4 months ago
Don't trust GPT. "Header manipulation – You can insert, modify, or delete HTTP headers in the request or response" - https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cloudfront-functions.html
upvoted 4 times
...
...
mfsec
1 year, 5 months ago
Selected Answer: A
Cloudfront can do it.
upvoted 2 times
...
ramyaram
1 year, 5 months ago
Selected Answer: A
CloudFront functions are very light weight and most efficient for this use case
upvoted 2 times
...
scuzzy2010
1 year, 5 months ago
Selected Answer: A
"Header manipulation – You can insert, modify, or delete HTTP headers in the request or response." - https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cloudfront-functions.html
upvoted 5 times
...
gameoflove
1 year, 5 months ago
Selected Answer: B
B, as per the question: the devices that don't support certain HTTP headers need responses without those headers.
upvoted 1 times
...
kiran15789
1 year, 5 months ago
Selected Answer: A
Confused between A and D, but I will go with A in the exam based on the explanations below: https://medium.com/trackit/cloudfront-functions-vs-lambda-edge-which-one-should-you-choose-c88527647695 https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/edge-functions.html
upvoted 2 times
...
higashikumi
1 year, 5 months ago
D is correct This solution uses Amazon CloudFront with an Application Load Balancer (ALB) and AWS Lambda@Edge to remove problematic headers based on the User-Agent header. CloudFront can be used as a content delivery network (CDN) to deliver the metadata service to consumer devices while the ALB is used to invoke the correct Lambda function for each type of request. Lambda@Edge is used to modify the response headers in real-time based on the User-Agent header. This solution addresses the requirement to support older devices that do not support certain HTTP headers by removing problematic headers based on the value of the User-Agent header. It also leverages serverless technologies such as AWS Lambda and Lambda@Edge for scalability and cost-effectiveness.
upvoted 3 times
...
Appon
1 year, 5 months ago
In the question it's stated that "The company wants to migrate the (metadata) service to AWS..." In the answers involving CloudFront, there is no mention of migrating the metadata service... am I missing something?
upvoted 1 times
...
c73bf38
1 year, 6 months ago
Selected Answer: A
Per the feature comparisons between Lambda and CloudFront functions, A is the correct option as it clearly states it does header manipulation for the response headers and requests. https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/edge-functions.html
upvoted 2 times
...
dev112233xx
1 year, 6 months ago
Selected Answer: A
A is the correct answer. CloudFront Functions (not Lambda@Edge) are suited for such lightweight tasks and, very importantly, they are cheaper than Lambda@Edge, which costs 3x the price of a CloudFront function.
upvoted 3 times
...
Mahakali
1 year, 6 months ago
Selected Answer: A
Cloudfront function is the suitable option as it is mentioned as ideal for header manipulations. https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cloudfront-functions.html
upvoted 3 times
...
spd
1 year, 6 months ago
Selected Answer: D
Lambda@Edge can modify headers https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-examples.html
upvoted 1 times
...
PSPaul
1 year, 6 months ago
It should be D
upvoted 1 times
...
ospherenet
1 year, 6 months ago
A is the correct answer. Explanation: CloudFront is a good option for delivering content and improving user experience with caching, reducing latency and increasing availability. An Application Load Balancer (ALB) can be used with CloudFront to route requests to the correct Lambda function. The CloudFront function can be used to remove the problematic headers based on the User-Agent header to support older devices. Using CloudFront and Lambda functions will allow the company to adopt serverless technologies for this use case.
upvoted 1 times
...
c73bf38
1 year, 6 months ago
Selected Answer: D
D: This solution involves creating an Amazon CloudFront distribution for the metadata service and configuring it to forward requests to the Application Load Balancer (ALB), which is used to invoke the correct Lambda function for each type of request. A Lambda@Edge function should be created that will remove the problematic headers in response to viewer requests based on the value of the User-Agent header. This approach allows the company to remove the problematic headers while supporting older devices and using serverless technologies.
upvoted 3 times
...
SubbuKhan
1 year, 6 months ago
Selected Answer: D
Lambda@Edge lets you run Lambda functions to customize the content that CloudFront delivers, executing the functions in AWS locations closer to the viewer. The functions run in response to CloudFront events, without provisioning or managing servers. You can use Lambda functions to change CloudFront requests and responses at the following points:
- After CloudFront receives a request from a viewer (viewer request)
- Before CloudFront forwards the request to the origin (origin request)
- After CloudFront receives the response from the origin (origin response)
- Before CloudFront forwards the response to the viewer (viewer response)
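For comparison with the CloudFront Function approach, a minimal Lambda@Edge sketch attached to one of the response events above; the User-Agent marker and the header name are hypothetical placeholders:

'use strict';

// Lambda@Edge handler for a CloudFront viewer/origin response event.
// The event record carries both the request and the response, so the
// response headers can be edited based on the request's User-Agent.
exports.handler = async (event) => {
    const cf = event.Records[0].cf;
    const uaHeader = cf.request.headers['user-agent'];
    const ua = uaHeader ? uaHeader[0].value : '';
    const response = cf.response;
    if (ua.includes('LegacyRadio')) {                    // hypothetical device marker
        delete response.headers['x-problematic-header']; // hypothetical header name
    }
    return response;
};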
upvoted 1 times
RotterDam
1 year ago
CF Functions can do this as well - why use Lambda@Edge when you can do it at 1/6th the price with CF Functions?
upvoted 1 times
...
...
Ilk
1 year, 6 months ago
Selected Answer: A
A CF function or Lambda@Edge can do it. But the CF function is faster and cheaper. So it is A
upvoted 2 times
...
sergza
1 year, 6 months ago
Selected Answer: A
For simple header manipulation without the need for body access, I guess CF Functions are more appropriate than Lambda@Edge.
upvoted 3 times
...
lobana
1 year, 7 months ago
Selected Answer: A
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cloudfront-functions.html With CloudFront Functions in Amazon CloudFront, you can write lightweight functions in JavaScript for high-scale, latency-sensitive CDN customizations. Your functions can manipulate the requests and responses that flow through CloudFront, perform basic authentication and authorization, generate HTTP responses at the edge, and more.
upvoted 1 times
...
masetromain
1 year, 7 months ago
Selected Answer: D
D. Create an Amazon CloudFront distribution for the metadata service. Create an Application Load Balancer (ALB). Configure the CloudFront distribution to forward requests to the ALB. Configure the ALB to invoke the correct Lambda function for each type of request. Create a Lambda@Edge function that will remove the problematic headers in response to viewer requests based on the value of the User-Agent header. This solution would allow the company to use CloudFront as a CDN to improve the performance of the service, and use Lambda@Edge to remove the problematic headers, allowing older devices to access the service without errors. The ALB can route requests to the correct Lambda function based on the request type.
upvoted 1 times
...
eraser2021999
1 year, 7 months ago
Selected Answer: D
D as per explanations of Stephane's Udemy training.
upvoted 2 times
Ilk
1 year, 6 months ago
I read an answer on the Q&A part of that course. In that answer, he stated that it can also be performed by a CF function. In addition, CF Functions are cheaper and faster. So it can be A
upvoted 2 times
...
...
mmendozaf
1 year, 8 months ago
Selected Answer: C
As most of the logic is related to User-Agent headers, API Gateway has more capabilities. This discards A and D. Between B and C, the requirement is only to delete the header for specific User-Agents and not by default, discarding option B. https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-mapping-template-reference.html
upvoted 8 times
...
hobokabobo
1 year, 8 months ago
Selected Answer: D
A) While CloudFront is able to remove headers, it cannot do that conditionally depending on another header (User-Agent).
B) API Gateway can add and modify headers but not remove them.
C) HTTP API: we want dynamic behavior, an API. -> Wrong from the beginning.
D) Lambda@Edge can remove headers, and as it is code, it can do so based on conditions.
upvoted 2 times
Sarutobi
1 year, 6 months ago
I think that "API Gateway can add and modify headers but not remove them." is not correct. See https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-parameter-mapping.html; scroll down to "Transforming API requests" and there is a table of options where you can see "append|overwrite|remove:header.headername", so you can delete headers with API Gateway.
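For reference, a rough sketch of what such a response parameter mapping looks like on an HTTP API integration, as I understand the docs (the header name is a hypothetical placeholder, and the value of a remove mapping is the literal ''; note the mapping is static, so it cannot branch on the User-Agent):

{
  "ResponseParameters": {
    "200": {
      "remove:header.x-problematic-header": "''"
    }
  }
}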
upvoted 2 times
hobokabobo
1 year, 4 months ago
Thanks. I looked at templates, which lack the functionality. I still do not see how to implement it with mapping. But if you say you got it working using mapping, I guess it works. Thanks.
upvoted 1 times
...
...
...
pppttl
1 year, 8 months ago
Selected Answer: A
A, because it's faster. CloudFront Functions vs. Lambda@Edge use cases:
- CF Functions:
  - cache key normalization: transform request attributes (headers, cookies, query string, URL) to create an optimal cache key
  - header manipulation: insert/modify/delete HTTP headers in the request or response
  - URL rewrites or redirects
  - request authentication & authorization: create and validate user-generated tokens (e.g. JWT) to allow/deny requests
- Lambda@Edge:
  - longer execution time (several ms)
  - adjustable CPU or memory
  - 3rd party dependencies (like the AWS SDK)
  - network access to use external services for processing
  - file system access or access to the body of HTTP requests
upvoted 3 times
...
Untamables
1 year, 8 months ago
Selected Answer: D
Vote D. A and D can modify headers programmably. AWS mentions Lambda@Edge supports modifying headers based on User-Agent value in their document. Option B is wrong. It is just able to override.
upvoted 1 times
...
WuKongCoder
1 year, 8 months ago
B is the correct answer. On-premises means you can't use CloudFront, and the API Gateway HTTP API doesn't support response mapping templates. https://docs.aws.amazon.com/zh_cn/apigateway/latest/developerguide/http-api-vs-rest.html
upvoted 1 times
Cloud_noob
1 year, 8 months ago
I think the question is saying they are migrating the on-premises services to AWS and the application is already migrated to Lambda. Why can't we use CloudFront?
upvoted 1 times
...
...
karysff
1 year, 8 months ago
Selected Answer: B
API gateway can rewrite header https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-override-request-response-parameters.html
upvoted 2 times
hobokabobo
1 year, 8 months ago
yes: rewrite and add but imo it cannot remove a header.
upvoted 2 times
Arnaud92
12 months ago
yes it does : https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-parameter-mapping.html
upvoted 1 times
...
...
...
robertohyena
1 year, 8 months ago
Answer is D.
- Lambda@Edge can remove problematic headers.
- API Gateway can only do request/response transformation.
upvoted 7 times
...
Question #6 Topic 1

A retail company needs to provide a series of data files to another company, which is its business partner. These files are saved in an Amazon S3 bucket under Account A, which belongs to the retail company. The business partner company wants one of its IAM users, User_DataProcessor, to access the files from its own AWS account (Account B).
Which combination of steps must the companies take so that User_DataProcessor can access the S3 bucket successfully? (Choose two.)

  • A. Turn on the cross-origin resource sharing (CORS) feature for the S3 bucket in Account A.
  • B. In Account A, set the S3 bucket policy to the following:
  • C. In Account A, set the S3 bucket policy to the following:
  • D. In Account B, set the permissions of User_DataProcessor to the following:
  • E. In Account B, set the permissions of User_DataProcessor to the following:
Reveal Solution Hide Solution

Correct Answer: D 🗳️

Community vote distribution
C (63%)
D (34%)
2%

robertohyena
Highly Voted 1 year, 8 months ago
Answer: C & D Source: https://aws.amazon.com/premiumsupport/knowledge-center/cross-account-access-s3/ https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-walkthroughs-managing-access-example4.html
upvoted 30 times
...
higashikumi
Highly Voted 1 year, 5 months ago
C & D. To allow User_DataProcessor to access the S3 bucket from Account B, the following steps need to be taken:
1. In Account A, set the S3 bucket policy to allow access to the bucket from the IAM user in Account B. This is done by adding a statement to the bucket policy that allows the IAM user in Account B to perform the necessary actions (GetObject and ListBucket) on the bucket and its contents.
2. In Account B, create an IAM policy that allows the IAM user (User_DataProcessor) to perform the necessary actions (GetObject and ListBucket) on the S3 bucket and its contents. The policy should reference the ARN of the S3 bucket and the actions that the user is allowed to perform.
Note: turning on the cross-origin resource sharing (CORS) feature for the S3 bucket in Account A is not necessary for this scenario, as CORS is typically used to allow web browsers to access resources from different domains.
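A minimal sketch of the two policies under those assumptions (the Account B ID is a placeholder; note that s3:ListBucket applies to the bucket ARN itself, while s3:GetObject applies to the objects under it):

Bucket policy in Account A:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::<AccountB-ID>:user/User_DataProcessor" },
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::AccountABucketName",
        "arn:aws:s3:::AccountABucketName/*"
      ]
    }
  ]
}

IAM policy attached to User_DataProcessor in Account B:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::AccountABucketName",
        "arn:aws:s3:::AccountABucketName/*"
      ]
    }
  ]
}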
upvoted 18 times
...
dEgYnIDA
Most Recent 1 month, 1 week ago
Selected Answer: D
The question says Choose two. The answer is C & D.
upvoted 1 times
...
kpcert
2 months, 2 weeks ago
Selected Answer: C
Answer: C and D. Two options have to be selected.
upvoted 1 times
...
kpcert
2 months, 2 weeks ago
Answer: C and D. Two options have to be selected.
upvoted 1 times
...
MoT0ne
5 months, 2 weeks ago
Selected Answer: C
Cross-Origin Resource Sharing (CORS) is a security feature in Amazon S3 that allows you to control access to your S3 resources from a different domain (origin) than the one serving the resources. CORS defines a way for client web applications running in one origin to interact with resources in a different origin, which is otherwise restricted by the same-origin policy enforced by web browsers.
upvoted 1 times
...
Dgix
5 months, 4 weeks ago
C and D.
upvoted 1 times
...
awsylum
6 months ago
The answer is C and D. You need to give the IAM user in Account B an IAM policy, and you need a bucket policy in Account A. Who is maintaining this database of questions? Someone needs to seriously set the correct answers before confusing a lot of people and potentially screwing up their exams.
upvoted 1 times
...
chelbsik
6 months, 3 weeks ago
Selected Answer: D
Correct answer: C and D. Adding my vote for D to balance the result. Moderator, please fix the vote on this ticket.
upvoted 1 times
...
ftaws
6 months, 3 weeks ago
Why do we need two steps? I think we only need one: either the resource-based policy or the identity-based policy.
upvoted 1 times
...
Vaibs099
7 months ago
Answer C & D
upvoted 1 times
...
atirado
8 months, 1 week ago
Selected Answer: C
Option A - CORS does not address cross-account access to S3 buckets.
Option B - This option would not work because the bucket policy is missing the Principal.
Option C - This option provides a valid S3 bucket policy that grants access to User_DataProcessor.
Option D - These permissions allow User_DataProcessor to get objects out of the bucket.
Option E - This option would not work because it is not a valid IAM policy.
upvoted 1 times
...
shaaam80
8 months, 3 weeks ago
Selected Answer: C
Answer - C & D
upvoted 2 times
...
severlight
9 months, 2 weeks ago
Selected Answer: D
C, D. D and not E, because it is an identity-based inline policy already attached to the specific principal.
upvoted 4 times
...
alonis2201
9 months, 3 weeks ago
A, C. Access settings need to be configured only in Account A, as it is the owner: enabling cross-origin access and granting bucket access to the Account B IAM user.
upvoted 2 times
...
rlf
10 months, 1 week ago
Answer : C&D.
upvoted 2 times
...
puffetor
11 months ago
Hello, I've just tested this on my AWS account to be 100% sure. The correct answer is C & D. C alone is enough only for same-account access; for cross-account access like in this case, D is needed too, otherwise it does not work.
upvoted 4 times
...
ansgohar
11 months ago
Selected Answer: C
Answer: C
upvoted 2 times
...
career360guru
11 months, 3 weeks ago
A & C are the right answer
upvoted 3 times
...
[Removed]
1 year, 1 month ago
C & D: first allow the b account user to get access to the bucket objects and list. then on the b account give the user the permissions to do that
upvoted 2 times
...
NikkyDicky
1 year, 2 months ago
C&D. can only vote for one? lol
upvoted 2 times
...
BasselBuzz
1 year, 2 months ago
Selected Answer: D
C and D for sure
upvoted 2 times
...
SkyZeroZx
1 year, 2 months ago
Selected Answer: D
Answer: C & D Source: https://aws.amazon.com/premiumsupport/knowledge-center/cross-account-access-s3/ https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-walkthroughs-managing-access-example4.html
upvoted 2 times
...
rbm2023
1 year, 3 months ago
Selected Answer: C
C AND D.
C. In Account A, set the S3 bucket policy to the following: "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::AccountB:user/User_DataProcessor" }, "Action": [ "s3:GetObject", "s3:ListBucket" ], "Resource": "arn:aws:s3:::AccountABucketName/*"
D. In Account B, set the permissions of User_DataProcessor to the following: "Effect": "Allow", "Action": [ "s3:GetObject", "s3:ListBucket" ], "Resource": "arn:aws:s3:::AccountABucketName/*"
These steps allow the IAM user User_DataProcessor from Account B to access the S3 bucket in Account A by granting the appropriate permissions.
upvoted 2 times
...
rtguru
1 year, 3 months ago
C&D is the correct answer
upvoted 2 times
...
AmitB
1 year, 3 months ago
Answer is C& D Ref https://repost.aws/knowledge-center/cross-account-access-s3
upvoted 2 times
...
iamunstopable
1 year, 4 months ago
Answer: C & D are correct
upvoted 2 times
...
EthicalBond
1 year, 4 months ago
Selected Answer: C
Doesn't make sense for account B to control access to resources in account A. So D is NOT the answer. Account A owns the bucket and sets the bucket policy to allow access to a principal/user in Account B
upvoted 2 times
momo3321
1 year, 4 months ago
Nope, this is a multiple-answer question, and in this case steps are required on both sides (Account A & Account B). It doesn't work if only Account B opens a policy toward the bucket that belongs to Account A.
upvoted 1 times
...
...
Don2021
1 year, 4 months ago
C & D - 100%
upvoted 2 times
...
elad18
1 year, 4 months ago
Selected Answer: C
C & D. But the ListBucket action won't work as you need to mention the arn of the bucket itself as well (without the /*)
upvoted 2 times
...
OCHT
1 year, 4 months ago
Selected Answer: C
C & D.
upvoted 1 times
...
hpipit
1 year, 5 months ago
Selected Answer: C
C and D, 100%
upvoted 1 times
...
dev112233xx
1 year, 5 months ago
Selected Answer: C
C+D no doubts
upvoted 1 times
...
mfsec
1 year, 5 months ago
Selected Answer: C
C + D are right
upvoted 1 times
...
gameoflove
1 year, 5 months ago
Selected Answer: C
I would select C as Account A need to grant access
upvoted 1 times
...
kiran15789
1 year, 5 months ago
Selected Answer: C
going with C and D
upvoted 1 times
...
dev112233xx
1 year, 5 months ago
Selected Answer: D
C & D are the correct answers ✅
upvoted 2 times
...
Ajani
1 year, 5 months ago
There are two ways to grant cross-account permissions: bucket policies or IAM roles. For this question, a bucket policy plus a user policy is required to delegate access to the user in the partner account, and the bucket policy will include the bucket ARN. https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-walkthroughs-managing-access-example4.html#access-policies-walkthrough-example4-overview C: Bucket policy in Account A. D: User policy in Account B.
upvoted 2 times
...
vandergun
1 year, 6 months ago
Selected Answer: C
c&D for sure
upvoted 1 times
...
DWsk
1 year, 6 months ago
Selected Answer: D
I think the answer is C & D. But what's with E? You don't need the principal, but it would still work, right?
upvoted 2 times
...
skashanali
1 year, 7 months ago
Selected Answer: C
Allowing a specific user and specific actions on the mentioned S3 bucket is the right way. Always think in terms of fine-grained access.
upvoted 1 times
...
Teknoklutz
1 year, 7 months ago
Selected Answer: C
C and E
upvoted 1 times
...
mmendozaf
1 year, 8 months ago
Selected Answer: C
Permissions must be granted on the source side, at the very least.
upvoted 1 times
...
hobokabobo
1 year, 8 months ago
Selected Answer: A
It says choose two: C & A. C grants access and A whitelists the other domain.
upvoted 1 times
hobokabobo
1 year, 4 months ago
Stupid me, if only I could read: C and D are the necessary policies.
upvoted 1 times
...
...
skashanali
1 year, 8 months ago
Selected Answer: C
Answer C is for the S3 CORS bucket policy, and answer D is for the user permissions that allow access to the S3 bucket.
upvoted 1 times
...
Arun_Bala
1 year, 8 months ago
Selected Answer: C
Ans C & D
upvoted 2 times
...
masetromain
1 year, 8 months ago
ExamTopics mis-edited the question "(Choose two.)". I would answer C & D.
upvoted 4 times
...
Question #7 Topic 1

A company is running a traditional web application on Amazon EC2 instances. The company needs to refactor the application as microservices that run on containers. Separate versions of the application exist in two distinct environments: production and testing. Load for the application is variable, but the minimum load and the maximum load are known. A solutions architect needs to design the updated application with a serverless architecture that minimizes operational complexity.
Which solution will meet these requirements MOST cost-effectively?

  • A. Upload the container images to AWS Lambda as functions. Configure a concurrency limit for the associated Lambda functions to handle the expected peak load. Configure two separate Lambda integrations within Amazon API Gateway: one for production and one for testing.
  • B. Upload the container images to Amazon Elastic Container Registry (Amazon ECR). Configure two auto scaled Amazon Elastic Container Service (Amazon ECS) clusters with the Fargate launch type to handle the expected load. Deploy tasks from the ECR images. Configure two separate Application Load Balancers to direct traffic to the ECS clusters.
  • C. Upload the container images to Amazon Elastic Container Registry (Amazon ECR). Configure two auto scaled Amazon Elastic Kubernetes Service (Amazon EKS) clusters with the Fargate launch type to handle the expected load. Deploy tasks from the ECR images. Configure two separate Application Load Balancers to direct traffic to the EKS clusters.
  • D. Upload the container images to AWS Elastic Beanstalk. In Elastic Beanstalk, create separate environments and deployments for production and testing. Configure two separate Application Load Balancers to direct traffic to the Elastic Beanstalk deployments.

Correct Answer: B 🗳️

Community vote distribution
B (81%)
Other

masetromain
Highly Voted 1 year, 7 months ago
Selected Answer: B
B. Upload the container images to Amazon Elastic Container Registry (Amazon ECR). Configure two auto scaled Amazon Elastic Container Service (Amazon ECS) clusters with the Fargate launch type to handle the expected load. Deploy tasks from the ECR images. Configure two separate Application Load Balancers to direct traffic to the ECS clusters. This option meets the requirement of using a serverless architecture by utilizing the Fargate launch type for the ECS clusters, which allows for automatic scaling of the containers based on the expected load. It also allows for separate deployments for production and testing by configuring separate ECS clusters and Application Load Balancers for each environment. This option also minimizes operational complexity by utilizing ECS and Fargate for the container orchestration and scaling.
upvoted 19 times
...
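As a concrete illustration of what option B involves, here is a minimal boto3 sketch for one environment; the cluster name, image URI, execution role, subnet, security group, and target group ARN are all placeholders, and the ALB/target group are assumed to exist already:

import boto3

ecs = boto3.client("ecs")

# One cluster per environment (production shown; repeat for testing).
ecs.create_cluster(clusterName="prod-cluster")

# Task definition pointing at the image pushed to ECR.
ecs.register_task_definition(
    family="web-service",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # assumed to exist
    containerDefinitions=[{
        "name": "web",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web:latest",  # placeholder
        "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
    }],
)

# Fargate service sized for the known minimum load, fronted by the ALB.
ecs.create_service(
    cluster="prod-cluster",
    serviceName="web",
    taskDefinition="web-service",
    desiredCount=2,                       # known minimum load
    launchType="FARGATE",
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": ["subnet-0abc"],       # placeholder
        "securityGroups": ["sg-0abc"],    # placeholder
        "assignPublicIp": "DISABLED",
    }},
    loadBalancers=[{
        "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/prod/abc",  # placeholder
        "containerName": "web",
        "containerPort": 80,
    }],
)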
zhangyu20000
Highly Voted 1 year, 8 months ago
Answer is A. A, B, and C all work, but A is the most COST EFFECTIVE.
upvoted 13 times
masetromain
1 year, 8 months ago
That's true, but "you can now package and deploy Lambda functions as container images of up to 10 GB in size." The image size is not specified in the question; personally I find that limit too small.
upvoted 3 times
anita_student
1 year, 6 months ago
A 10 GB image is too small for what? I'm curious how you would containerise those images; I'd say the average image size is ~300-400 MB.
upvoted 3 times
...
...
zhangyu20000
1 year, 8 months ago
https://aws.amazon.com/blogs/aws/new-for-aws-lambda-container-image-support/
upvoted 3 times
...
anita_student
1 year, 6 months ago
Yes, would be cheap, but can't run a web app from Lambda
upvoted 5 times
...
yuyuyuyuyu
1 year, 7 months ago
I do not think A is the right answer, because the image must be uploaded to ECR.
upvoted 3 times
...
MansaMunsa
1 year, 5 months ago
A) is not correct. AWS documentation says you can package and deploy Lambda functions AS container images. A) says to deploy container images as Lambda functions, which is the opposite.
upvoted 5 times
...
bcx
1 year, 2 months ago
Not trivial to move containers to lambda functions. Not impossible though. They have containers. A serverless way of directly hosting those containers is ECS fargate.
upvoted 1 times
...
chikorita
1 year, 3 months ago
You surely don't have any industry experience, or else you wouldn't recommend running a microservice architecture on LAMBDA functions.
upvoted 10 times
...
puffetor
11 months ago
You cannot run always-on container microservices on Lambda. Lambda containers need to be prepared ad hoc to run a predefined command that dies within a maximum of 15 minutes, so it does not make any sense.
upvoted 5 times
...
...
MAZIADI
Most Recent 2 weeks, 2 days ago
Selected Answer: A
A, because ALB & Beanstalk are not serverless, and Lambda added support for using Docker images directly. You can upload container images to AWS Lambda and use them as functions. AWS Lambda introduced support for deploying functions as container images, allowing you to package and deploy Lambda functions with custom runtimes, libraries, and dependencies that might exceed the limitations of traditional Lambda deployment packages (zip files).
upvoted 1 times
...
ukivanlamlpi
1 month, 3 weeks ago
Selected Answer: A
NOT B and C, because an ALB is not a serverless architecture. NOT D, because Beanstalk is not a serverless architecture. A is also the most cost-effective.
upvoted 1 times
...
4bc91ae
2 months ago
Selected Answer: A
A - COST EFFECTIVE (Lambda is the only solution where users pay per invocation)
upvoted 1 times
...
Wuhao
3 months, 2 weeks ago
Selected Answer: B
If you choose EKS, you shouldn't have to configure load balancing manually.
upvoted 1 times
...
TonytheTiger
4 months, 4 weeks ago
Selected Answer: B
Option B and NOT Option C: I wasn't able to find a good comparison between AWS ECS and AWS EKS pricing in the AWS documentation; however, I found a few articles saying that AWS EKS has an additional cost for the EKS control plane. I will leave it up to you to decide. https://www.densify.com/eks-best-practices/aws-ecs-vs-eks/
upvoted 1 times
...
MoT0ne
5 months, 2 weeks ago
AWS Elastic Beanstalk is not considered a serverless architecture. While it abstracts away some of the underlying infrastructure management, it still involves running and managing EC2 instances, which are virtual servers.
upvoted 1 times
...
_Jassybanga_
6 months, 3 weeks ago
D, because B, C, and D are all workable solutions. D because Beanstalk runs ECS in the backend and reduces operational complexity, which is what the question asks for.
upvoted 1 times
...
liux99
7 months, 4 weeks ago
The confusion here is the choice between B and C. Both ECS and EKS are container orchestration services that support Fargate, but ECS is fully managed by AWS, better suited to simple applications, and also more cost-effective.
upvoted 1 times
...
atirado
8 months, 1 week ago
Selected Answer: B
Option A - This option might not work. AWS Lambda provides a cheap option to run containers; however, nothing is said about execution times, which could be a concern, i.e. AWS Lambda only provides 15 minutes of execution time.
Option B - This option will work. ALB, ECR, ECS and Fargate in combination will deliver a running solution.
Option C - This option will work. ALB, ECR, EKS and Fargate will deliver a running solution.
Option D - This option will work: Beanstalk will rely on ECS to run the containers. See https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_ecs.html
Cheapest option is B.
upvoted 3 times
...
ninomfr64
8 months, 3 weeks ago
Selected Answer: B
Not A, as Lambda is not good for running a "traditional web application"; you can use containers with Lambda, but ECS is "ideal for organizations that want a simple and cost-effective way to deploy and manage containerized applications". Not C, as there is no pointer to EKS (e.g. open source, industry standard, etc.), and again ECS is "ideal for organizations that want a simple and cost-effective way to deploy and manage containerized applications". Not D, as Beanstalk is not serverless. Hence B.
upvoted 2 times
...
severlight
9 months, 2 weeks ago
Selected Answer: B
B. Not D as Beanstalk isn't serverless. Not C because there are no pointers to use EKS. Not A, because microservices are requested.
upvoted 1 times
...
hui521
11 months ago
Can anyone help explain why D is not correct?
upvoted 1 times
Chainshark
10 months, 3 weeks ago
Beanstalk is a PaaS, it isn't truly serverless.
upvoted 1 times
...
...
ansgohar
11 months ago
Selected Answer: B
B. Images go to ECR, and ECS is more cost-effective than EKS.
upvoted 2 times
...
task_7
11 months, 2 weeks ago
Selected Answer: D
I would go with D: a serverless architecture that minimizes operational complexity.
upvoted 2 times
...
cheese929
11 months, 3 weeks ago
Selected Answer: B
B is correct.
upvoted 1 times
...
career360guru
11 months, 3 weeks ago
B is the right option. A is possible, but Lambda container images have a 10 GB size limitation and require you to keep updating the images as the customer refactors the code. I feel A will have higher operational overhead. B is the best option: the most cost-effective and operationally efficient.
upvoted 1 times
...
dimitry_khan_arc
1 year ago
Selected Answer: B
B. Images go to ECR, and ECS is more cost-effective than EKS.
upvoted 1 times
...
asim_rasheed
1 year ago
Guys, please don't post whatever answer you happen to think of. This is a community effort, and an answer that makes no sense (thrown out without logic or reading) will confuse others and make you look foolish. So contribute if you really want to; otherwise, move on without making this forum dirty.
upvoted 5 times
jahmad0730
10 months, 1 week ago
You're a dirty dog.
upvoted 2 times
...
...
Shijokingo
1 year ago
B seems right. https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_types.html C seems to be a distractor, as there is no such option as an Amazon EKS "Fargate launch type".
upvoted 2 times
aviathor
12 months ago
You can indeed use Fargate with EKS... https://docs.aws.amazon.com/eks/latest/userguide/fargate.html
upvoted 1 times
...
...
stevegod0
1 year ago
Seems option A provides the most cost-effective solution with minimal operational complexity by leveraging AWS Lambda and API Gateway for the serverless architecture of the microservices.
upvoted 1 times
...
hirenshah005
1 year, 1 month ago
Selected Answer: B
The key points you can extract from the question are that the solution has to have minimal operational overhead and be the most cost-effective. The number of steps needed to make Lambda work is high: beyond what is mentioned in A, we also need API Gateway to make it work properly. Also, Lambda at high concurrency is expensive compared to Fargate. On the other hand, B keeps it super simple: ECS with Fargate makes it cheaper.
upvoted 2 times
...
NikkyDicky
1 year, 2 months ago
Selected Answer: B
it's B
upvoted 1 times
...
Jonalb
1 year, 2 months ago
Selected Answer: B
Explanation:
Amazon ECS with Fargate: By uploading the container images to Amazon ECR and using Amazon ECS with the Fargate launch type, you can run the microservices in containers without having to manage the underlying infrastructure. Fargate automatically scales the containers based on the load.
Separate Production and Testing Environments: With two separate auto-scaled Amazon ECS clusters, you can have dedicated environments for production and testing, ensuring isolation and allowing for separate deployments and configurations.
Application Load Balancers (ALB): Configuring two separate ALBs allows you to direct traffic to the appropriate ECS clusters. This ensures proper routing of requests between the production and testing environments.
Option B provides a cost-effective solution by utilizing the serverless nature of Fargate, which eliminates the need to provision and manage EC2 instances explicitly. It also allows for separate environments, easy scalability, and traffic routing using ALBs, providing flexibility and minimizing operational complexity.
upvoted 2 times
...
SkyZeroZx
1 year, 2 months ago
Selected Answer: B
EKS is more costly than just using ECS with Fargate, so B.
upvoted 3 times
...
Jonalb
1 year, 2 months ago
Selected Answer: B
I would vote for B! Although segmenting environments with namespaces in a k8s cluster is a reality for cost reasons, it is not a good practice.
upvoted 1 times
...
rtguru
1 year, 3 months ago
B seems to be the most cost-effective compared to A & C.
upvoted 1 times
...
EthicalBond
1 year, 4 months ago
Selected Answer: B
A is great but takes time and too many integrations. B is serverless and easy to achieve. C is not serverless. D is not applicable.
upvoted 1 times
...
2aldous
1 year, 4 months ago
A. Before the discussion, check this: https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-images.html Also, managing two load balancers is not cost-effective.
upvoted 1 times
2aldous
1 year, 3 months ago
Change to "B", because A says "upload image to AWS Lambda" that's actually not possible, you should upload the image to ECR also for Lambda container.
upvoted 2 times
...
...
dev112233xx
1 year, 4 months ago
B makes more sense... A says "upload the container images to AWS Lambda", which isn't actually possible; you upload the image to ECR even for Lambda containers.
upvoted 3 times
...
cuonglc
1 year, 5 months ago
Selected Answer: B
B for sure
upvoted 1 times
...
mfsec
1 year, 5 months ago
Selected Answer: B
ECS + Fargate
upvoted 2 times
...
kiran15789
1 year, 5 months ago
Selected Answer: A
Confused between A and B, but after a long think I decided to go with A. Option A suggests uploading the container images to AWS Lambda as functions and configuring a concurrency limit to handle the expected peak load. This approach allows the company to take advantage of the benefits of serverless computing, such as auto-scaling, without having to manage any infrastructure. In addition, using Lambda integrations within Amazon API Gateway allows the company to direct traffic to the appropriate environment for testing or production.
upvoted 1 times
frfavoreto
1 year, 4 months ago
Not correct. The question already states that the application has to be migrated to a container. 'A' mentions something not feasible (uploading container images to Lambda) and also doesn't meet the requirement to migrate to a containerised architecture. Option 'B' meets all the requirements by offering a serverless way to launch containers in AWS (Fargate instances).
upvoted 1 times
...
...
higashikumi
1 year, 5 months ago
Option B is the most cost-effective solution that meets all the requirements. This solution uploads the container images to Amazon Elastic Container Registry (Amazon ECR) and deploys them using Amazon Elastic Container Service (Amazon ECS) clusters with the Fargate launch type to handle the expected load. Two separate Application Load Balancers are configured to direct traffic to the ECS clusters for production and testing. This solution is cost-effective as it leverages the benefits of serverless architecture with Fargate launch type that removes the need for server management and the cost of running idle servers. Additionally, with auto-scaling, the resources can be dynamically adjusted to handle varying traffic. Furthermore, the use of Application Load Balancers reduces operational complexity and allows for efficient traffic routing.
upvoted 1 times
...
macc183
1 year, 6 months ago
Selected Answer: B
I think the answer is B
upvoted 1 times
...
c73bf38
1 year, 6 months ago
Selected Answer: A
https://docs.aws.amazon.com/wellarchitected/latest/serverless-applications-lens/definitions.html The question states the Solutions Architect needs to update the application with a serverless architecture.
upvoted 1 times
...
c73bf38
1 year, 6 months ago
Selected Answer: B
Option B is the most cost-effective solution for the following reasons:
The use of Fargate, a serverless compute engine for containers, eliminates the need for managing and scaling the underlying infrastructure. This minimizes operational complexity and reduces costs, as the resources are used only when required.
Auto scaling ensures that the application scales up and down based on the load, providing the required performance and availability without incurring additional costs.
Amazon ECS is a simpler and more cost-effective solution than Amazon EKS, which requires more management and additional resources to operate the Kubernetes control plane.
Using Application Load Balancers to direct traffic to the ECS clusters ensures high availability and fault tolerance.
upvoted 3 times
c73bf38
1 year, 6 months ago
Changing to A; B is neither serverless nor cost-effective.
upvoted 1 times
bcx
1 year, 2 months ago
Fargate is serverless by definition.
upvoted 2 times
...
...
...
Sarutobi
1 year, 6 months ago
Selected Answer: A
Although I would not use this approach in production, A is the cheapest. ECS/EKS needs some LB in front, plus the hourly fee for the cluster.
upvoted 1 times
...
Musk
1 year, 6 months ago
Selected Answer: B
B is cheaper than C, otherwise both would work
upvoted 1 times
...
moota
1 year, 6 months ago
Selected Answer: B
A can be cheaper but it's not performant for a web application. I assume that A does not use provisioned concurrency, so I have to deal with cold starts. If I use provisioned concurrency, I can make B cheaper.
upvoted 1 times
...
sergza
1 year, 6 months ago
Selected Answer: A
A is the most cost-effective: it does not need an ALB and has the smallest operational overhead.
upvoted 2 times
...
NYB
1 year, 7 months ago
it should be ECR + ECS + Fargate, ans: B
upvoted 2 times
...
jeussin
1 year, 7 months ago
Enable EKS+Fargate ??
upvoted 1 times
Untamables
1 year, 7 months ago
Currently available.
upvoted 1 times
leehjworking
1 year, 4 months ago
Can I have the source, please?
upvoted 1 times
...
...
...
skashanali
1 year, 8 months ago
Selected Answer: B
C & D is both valid but when it comes to cost-effective solution, I would go for ECS which does have additional cluster cost for its control plane. https://www.clickittech.com/aws/amazon-ecs-vs-eks/
upvoted 2 times
...
Untamables
1 year, 8 months ago
Selected Answer: B
I Vote B. https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-java-microservices-on-amazon-ecs-using-amazon-ecr-and-aws-fargate.html https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-java-microservices-on-amazon-ecs-using-amazon-ecr-and-load-balancing.html Options C and D also work, but B is the most cost-effective. Option A is wrong: it can launch only APIs and does not cover the web UI. https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-lambda-functions-with-container-images.html Option C is wrong: Amazon EKS costs a bit more than Amazon ECS. https://aws.amazon.com/ecs/pricing/ https://aws.amazon.com/eks/pricing/ Option D is wrong: the Docker environment of AWS Elastic Beanstalk is based on Amazon EC2, which costs more than AWS Fargate. https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.html
upvoted 6 times
moota
1 year, 6 months ago
It's still technically possible to return html/css with AWS Lambda like what this guy did https://stackoverflow.com/a/59385039/422842
upvoted 1 times
...
...
ptpho
1 year, 8 months ago
I go with B. Why EKS when there is no Kubernetes requirement? Plain ECS is fine (cost saving and container support).
upvoted 5 times
...
yuyuyuyuyu
1 year, 8 months ago
I think the correct answer is B. C has no notion of tasks.
upvoted 1 times
...
masetromain
1 year, 8 months ago
Selected Answer: C
Answer C makes the most sense https://aws.amazon.com/eks/ https://aws.amazon.com/ecr/
upvoted 3 times
masetromain
1 year, 7 months ago
Option C, using Amazon EKS with Fargate launch type, would be a valid solution for deploying containerized microservices, but it may not be the most cost-effective option. Amazon EKS is a managed Kubernetes service that is more complex to set up and operate than other container orchestration options like Amazon ECS or Elastic Beanstalk. It also generally incurs additional costs for the management of Kubernetes control plane and worker nodes. For a simple use case with a known load and minimal operational complexity, it may not be necessary to use a fully-managed Kubernetes service like EKS and a simpler solution like ECS or Elastic Beanstalk may be more cost-effective.
upvoted 2 times
...
...
Question #8 Topic 1

A company has a multi-tier web application that runs on a fleet of Amazon EC2 instances behind an Application Load Balancer (ALB). The instances are in an Auto Scaling group. The ALB and the Auto Scaling group are replicated in a backup AWS Region. The minimum value and the maximum value for the Auto Scaling group are set to zero. An Amazon RDS Multi-AZ DB instance stores the application’s data. The DB instance has a read replica in the backup Region. The application presents an endpoint to end users by using an Amazon Route 53 record.
The company needs to reduce its RTO to less than 15 minutes by giving the application the ability to automatically fail over to the backup Region. The company does not have a large enough budget for an active-active strategy.
What should a solutions architect recommend to meet these requirements?

  • A. Reconfigure the application’s Route 53 record with a latency-based routing policy that load balances traffic between the two ALBs. Create an AWS Lambda function in the backup Region to promote the read replica and modify the Auto Scaling group values. Create an Amazon CloudWatch alarm that is based on the HTTPCode_Target_5XX_Count metric for the ALB in the primary Region. Configure the CloudWatch alarm to invoke the Lambda function.
  • B. Create an AWS Lambda function in the backup Region to promote the read replica and modify the Auto Scaling group values. Configure Route 53 with a health check that monitors the web application and sends an Amazon Simple Notification Service (Amazon SNS) notification to the Lambda function when the health check status is unhealthy. Update the application’s Route 53 record with a failover policy that routes traffic to the ALB in the backup Region when a health check failure occurs.
  • C. Configure the Auto Scaling group in the backup Region to have the same values as the Auto Scaling group in the primary Region. Reconfigure the application’s Route 53 record with a latency-based routing policy that load balances traffic between the two ALBs. Remove the read replica. Replace the read replica with a standalone RDS DB instance. Configure Cross-Region Replication between the RDS DB instances by using snapshots and Amazon S3.
  • D. Configure an endpoint in AWS Global Accelerator with the two ALBs as equal weighted targets. Create an AWS Lambda function in the backup Region to promote the read replica and modify the Auto Scaling group values. Create an Amazon CloudWatch alarm that is based on the HTTPCode_Target_5XX_Count metric for the ALB in the primary Region. Configure the CloudWatch alarm to invoke the Lambda function.

Correct Answer: B 🗳️

Community vote distribution
B (100%)

masetromain
Highly Voted 1 year, 8 months ago
Selected Answer: B
I go with B https://docs.amazonaws.cn/en_us/Route53/latest/DeveloperGuide/welcome-health-checks.html
upvoted 18 times
masetromain
1 year, 7 months ago
B is correct, because it meets the company's requirements for reducing RTO to less than 15 minutes and not having a large budget for an active-active strategy. In this solution, the company creates an AWS Lambda function in the backup region which promotes the read replica and modifies the Auto Scaling group values. Route 53 is configured with a health check that monitors the web application and sends an Amazon SNS notification to the Lambda function when the health check status is unhealthy. The Route 53 record is also updated with a failover policy that routes traffic to the ALB in the backup region when a health check failure occurs. This way, when the primary region goes down, the failover policy triggers and traffic is directed to the backup region, ensuring a quick recovery time.
upvoted 14 times
...
...
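The Lambda function that option B describes is genuinely small. A sketch, assuming it runs in the backup Region, with the replica identifier and Auto Scaling group name as placeholders:

import boto3

# Runs in the backup Region, invoked via the SNS topic that the Route 53
# health check notifies. All resource names below are placeholders.
rds = boto3.client("rds", region_name="us-west-2")
autoscaling = boto3.client("autoscaling", region_name="us-west-2")

def lambda_handler(event, context):
    # Promote the cross-Region read replica to a standalone primary.
    rds.promote_read_replica(DBInstanceIdentifier="app-db-replica")

    # Scale the standby Auto Scaling group up from zero.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="app-asg-backup",
        MinSize=2,
        MaxSize=6,
        DesiredCapacity=2,
    )
    return {"status": "failover started"}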
Bereket
Most Recent 2 months, 1 week ago
Selected Answer: B
Correct answer B
upvoted 1 times
...
gofavad926
5 months, 1 week ago
Selected Answer: B
B for sure
upvoted 1 times
...
Vaibs099
7 months ago
This explains Lambda promoting backup read replica in other region - https://medium.com/ankercloud-engineering/aws-lambda-promoting-rds-read-replica-on-cross-region-using-aws-lambda-113db758869
upvoted 1 times
...
ftaws
7 months, 2 weeks ago
Why do we need a Lambda function? Isn't a Route 53 failover policy enough?
upvoted 1 times
rhinozD
6 months, 1 week ago
What about RDS failover? You need Lambda to promote the read replica.
upvoted 1 times
...
...
atirado
8 months, 1 week ago
Selected Answer: B
Option A - This option will not work as needed: the client will get errors when the closest Region is the application's backup Region.
Option B - This option implements an active-passive strategy as needed: when the health check fails, Route 53 will resolve to the backup Region, and the Lambda function will ensure the backup Region has resources to function.
Option C - This option implements an active-active strategy.
Option D - This option will not work as needed: the client will get errors 50% of the time.
upvoted 2 times
...
ninomfr64
8 months, 3 weeks ago
Selected Answer: B
The problem is not identifying the right answer, but reading quickly enough through all the words in the question!
upvoted 1 times
...
jainparag1
9 months, 1 week ago
Selected Answer: B
B satisfies all the requirements.
upvoted 1 times
...
severlight
9 months, 2 weeks ago
Selected Answer: B
A health check is a metric, hence alarms can be triggered; alarms are integrated with SNS, and SNS is integrated with Lambda. This sounds convoluted, but it will work.
upvoted 1 times
...
ansgohar
11 months ago
Selected Answer: B
B. Create an AWS Lambda function in the backup Region to promote the read replica and modify the Auto Scaling group values. Configure Route 53 with a health check that monitors the web application and sends an Amazon Simple Notification Service (Amazon SNS) notification to the Lambda function when the health check status is unhealthy. Update the application’s Route 53 record with a failover policy that routes traffic to the ALB in the backup Region when a health check failure occurs.
upvoted 2 times
...
dimitry_khan_arc
1 year ago
Selected Answer: B
Health check + SNS. This does not need active-active, which satisfies the requirement.
upvoted 1 times
...
NikkyDicky
1 year, 2 months ago
it's a B again
upvoted 1 times
...
Parimal1983
1 year, 2 months ago
Selected Answer: B
The company cannot afford an active-active configuration, and with Lambda the data layer can be promoted to primary.
upvoted 1 times
...
SkyZeroZx
1 year, 2 months ago
Selected Answer: B
SNS + Health check
upvoted 2 times
...
mfsec
1 year, 5 months ago
Selected Answer: B
SNS + Health check
upvoted 1 times
...
kiran15789
1 year, 5 months ago
Selected Answer: B
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-types.html
upvoted 1 times
...
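The failover-routing half of option B, as described in the linked page, comes down to a health check plus a PRIMARY/SECONDARY record pair. A hedged boto3 sketch with placeholder zone IDs, domain names, and ALB DNS names:

import boto3

r53 = boto3.client("route53")

# Health check against the primary endpoint (all values are placeholders).
hc = r53.create_health_check(
    CallerReference="app-primary-hc-1",
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "primary.example.com",
        "Port": 443,
        "ResourcePath": "/health",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)

# Failover record pair: primary ALB guarded by the health check,
# backup ALB as the SECONDARY target.
r53.change_resource_record_sets(
    HostedZoneId="Z111111111111",          # placeholder hosted zone ID
    ChangeBatch={"Changes": [
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "app.example.com", "Type": "A",
            "SetIdentifier": "primary", "Failover": "PRIMARY",
            "HealthCheckId": hc["HealthCheck"]["Id"],
            "AliasTarget": {
                "HostedZoneId": "ZALBPRIMARY111",  # placeholder ELB zone ID
                "DNSName": "primary-alb.us-east-1.elb.amazonaws.com",
                "EvaluateTargetHealth": True,
            },
        }},
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "app.example.com", "Type": "A",
            "SetIdentifier": "secondary", "Failover": "SECONDARY",
            "AliasTarget": {
                "HostedZoneId": "ZALBBACKUP222",   # placeholder ELB zone ID
                "DNSName": "backup-alb.us-west-2.elb.amazonaws.com",
                "EvaluateTargetHealth": True,
            },
        }},
    ]},
)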
higashikumi
1 year, 5 months ago
The best option to meet the requirements and reduce RTO to less than 15 minutes is to choose option B. Option B involves creating an AWS Lambda function in the backup region to promote the read replica and modify the Auto Scaling group values. Additionally, Route 53 can be configured with a health check that monitors the web application and sends an Amazon SNS notification to the Lambda function when the health check status is unhealthy. The application's Route 53 record can be updated with a failover policy that routes traffic to the ALB in the backup Region when a health check failure occurs. This option is cost-effective as it does not require an active-active strategy, and it uses AWS services to minimize the RTO. The Lambda function can be invoked to promote the read replica in the backup region, and the Auto Scaling group values can be updated to launch EC2 instances in the backup region. Furthermore, the Route 53 health check feature can be used to monitor the web application and initiate the failover process.
upvoted 1 times
...
Sarutobi
1 year, 6 months ago
Selected Answer: B
It would be interesting to see if this actually works. SNS is a regional service; in the last outage of the Virginia Region, we lost SNS completely.
upvoted 2 times
frfavoreto
1 year, 4 months ago
The SNS topic is in the backup region, not the primary. If you have an issue with the backup region at the same time there is not much you can do as your entire architecture is affected.
upvoted 2 times
Sarutobi
1 year, 4 months ago
That is a good point, but don't you need some health-API integration? How does SNS in one Region know about a failure in another? What if it was not a complete regional outage, and only one service in that Region failed? I know this is no longer the initial question :).
upvoted 1 times
frfavoreto
11 months, 1 week ago
First of all, SNS in one region doesn't need to know anything about the other region. In the backup region, SNS receives a message from Route53 that triggers a Lambda Function, this is simple as that. Secondly, you need to implement proper health checks in your frontend web server in order to return a 5xx or 4xx error codes to the probes coming from Route53. If anything is wrong (database, high latency or even the web server itself), Route53 notices the error code/timeout and immediately triggers the failover solution with SNS messaging. Route53 doesn't need to care about what exactly went wrong, just by receiving any unexpected results from the health checks it triggers the failover region.
upvoted 1 times
...
...
...
...
aws0909
1 year, 6 months ago
I will go with option B as it reduces the RTO
upvoted 1 times
...
Yihong
1 year, 6 months ago
Selected Answer: B
A: no health check. C: active-active. D: equal weights?
upvoted 3 times
...
Untamables
1 year, 8 months ago
Selected Answer: B
I Vote B. https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-types.html Option A, C and D are wrong. The latency-based routing and endopoint weights should be used for active/active strategy. https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy-latency.html https://docs.aws.amazon.com/global-accelerator/latest/dg/about-endpoints-endpoint-weights.html
upvoted 4 times
...
ptpho
1 year, 8 months ago
I go with B. 5xx is the wrong method to cover the case where the main site is completely down. It's not active-active load balancing, so Route 53 should not spread traffic between the two ALBs.
upvoted 3 times
...
Question #9 Topic 1

A company is hosting a critical application on a single Amazon EC2 instance. The application uses an Amazon ElastiCache for Redis single-node cluster for an in-memory data store. The application uses an Amazon RDS for MariaDB DB instance for a relational database. For the application to function, each piece of the infrastructure must be healthy and must be in an active state.
A solutions architect needs to improve the application's architecture so that the infrastructure can automatically recover from failure with the least possible downtime.
Which combination of steps will meet these requirements? (Choose three.)

  • A. Use an Elastic Load Balancer to distribute traffic across multiple EC2 instances. Ensure that the EC2 instances are part of an Auto Scaling group that has a minimum capacity of two instances.
  • B. Use an Elastic Load Balancer to distribute traffic across multiple EC2 instances. Ensure that the EC2 instances are configured in unlimited mode.
  • C. Modify the DB instance to create a read replica in the same Availability Zone. Promote the read replica to be the primary DB instance in failure scenarios.
  • D. Modify the DB instance to create a Multi-AZ deployment that extends across two Availability Zones.
  • E. Create a replication group for the ElastiCache for Redis cluster. Configure the cluster to use an Auto Scaling group that has a minimum capacity of two instances.
  • F. Create a replication group for the ElastiCache for Redis cluster. Enable Multi-AZ on the cluster.

Correct Answer: ADF 🗳️

Community vote distribution
ADF (97%)
3%

masetromain
Highly Voted 1 year, 8 months ago
Selected Answer: ADF
I go with ADF https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/AutoFailover.html
upvoted 16 times
spencer_sharp
1 year, 8 months ago
Why C is wrong?
upvoted 2 times
Karamen
1 year ago
Let's suppose one of the AZs in use fails?
upvoted 1 times
...
masetromain
1 year, 7 months ago
Other options like B and C do not meet the requirement: if the instances are only configured in unlimited mode, it will not be possible to ensure that there is always at least one healthy instance to handle traffic if there is a failure.
upvoted 1 times
God_Is_Love
1 year, 6 months ago
Issue with C: a read replica in the same AZ does not sound like high availability.
upvoted 6 times
...
...
dtha1002
1 year, 3 months ago
in question "can automatically recover from failure with the least possible downtime" C is correct but D is least possible downtime
upvoted 1 times
...
...
masetromain
1 year, 7 months ago
A. Using an Elastic Load Balancer (ELB) to distribute traffic across multiple EC2 instances can help ensure that the application remains available in the event that one of the instances becomes unavailable. By configuring the instances as part of an Auto Scaling group with a minimum capacity of two instances, you can ensure that there is always at least one healthy instance to handle traffic. D. Modifying the DB instance to create a Multi-AZ deployment that extends across two availability zones can help ensure that the database remains available in the event of a failure. In the event of a failure, traffic will automatically be directed to the secondary availability zone, reducing the amount of downtime. F. Creating a replication group for the ElastiCache for Redis cluster and enabling Multi-AZ can help ensure that the in-memory data store remains available in the event of a failure. This will allow traffic to be automatically directed to the secondary availability zone, reducing the amount of downtime.
upvoted 11 times
...
...
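Each of steps A, D, and F maps to roughly one API call. A hedged boto3 sketch with placeholder resource names:

import boto3

# A: keep at least two instances behind the load balancer.
boto3.client("autoscaling").update_auto_scaling_group(
    AutoScalingGroupName="app-asg",      # placeholder
    MinSize=2,
)

# D: convert the MariaDB instance to a Multi-AZ deployment.
boto3.client("rds").modify_db_instance(
    DBInstanceIdentifier="app-mariadb",  # placeholder
    MultiAZ=True,
    ApplyImmediately=True,
)

# F: replace the single-node Redis cluster with a Multi-AZ replication
# group; automatic failover needs at least one replica.
boto3.client("elasticache").create_replication_group(
    ReplicationGroupId="app-redis",      # placeholder
    ReplicationGroupDescription="App cache with automatic failover",
    Engine="redis",
    CacheNodeType="cache.t3.medium",
    NumCacheClusters=2,                  # primary + one replica
    AutomaticFailoverEnabled=True,
    MultiAZEnabled=True,
)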
Wuhao
Most Recent 3 months, 2 weeks ago
ElastiCache for Redis auto scaling is limited to the following: Redis (cluster mode enabled) clusters running Redis engine version 6.0 onwards. So E is out.
upvoted 1 times
...
joshnort
3 months, 2 weeks ago
Selected Answer: ADF
Satisfies the High Availability requirement on the EC2 instance, Amazon RDS for MariaDB DB instance, and ElastiCache for Redis cluster
upvoted 1 times
...
gofavad926
5 months, 1 week ago
Selected Answer: ADF
ADF, as mentioned in the other comments
upvoted 1 times
...
DmitriKonnovNN
6 months, 3 weeks ago
"The infrastructure can automatically recover from failure with the least possible downtime", to me this sounds rather resilient than highly-availible, since it focuses on MITR but not explicitly on up-time.
upvoted 1 times
...
atirado
8 months, 1 week ago
Selected Answer: ADF
Option A - Ensures there is always at least one healthy instance responding to requests. Nothing is said about whether the Auto Scaling group spans multiple AZs (but it must).
Option B - No such thing as EC2 unlimited mode.
Option C - Does not provide a place to fail over to.
Option D - Provides a place to fail over to.
Option E - Does not provide a place to fail over to.
Option F - Provides a place to fail over to.
Choose A, D, F.
upvoted 2 times
...
severlight
9 months, 2 weeks ago
Selected Answer: ADF
obvious
upvoted 1 times
...
ansgohar
11 months ago
Selected Answer: ADF
A, D, F
upvoted 1 times
...
NikkyDicky
1 year, 2 months ago
Selected Answer: ADF
it's of course ADF
upvoted 1 times
...
Parimal1983
1 year, 2 months ago
Selected Answer: ADF
For high availability, need to spin up instances in another zone with auto scaling and multi AZ options
upvoted 1 times
...
rtguru
1 year, 3 months ago
ADF will meet the described provisions
upvoted 1 times
...
RunkieMax
1 year, 3 months ago
Selected Answer: ADF
Fit the best the question
upvoted 1 times
...
Maja1
1 year, 4 months ago
Selected Answer: ADF
I wasn't sure if E or F was correct until I read this: "This replacement results in some downtime for the cluster, but if Multi-AZ is enabled, the downtime is minimized. The role of primary node will automatically fail over to one of the read replicas. There is no need to create and provision a new primary node, because ElastiCache will handle this transparently. This failover and replica promotion ensure that you can resume writing to the new primary as soon as promotion is complete." https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/AutoFailover.html
upvoted 4 times
...
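The automatic promotion quoted above can even be exercised on demand with the TestFailover API. A small sketch, with the replication group and node group IDs as placeholders:

import boto3

# Simulates failure of the primary node in one shard so the automatic
# promotion described above can be observed. IDs are placeholders.
boto3.client("elasticache").test_failover(
    ReplicationGroupId="app-redis",
    NodeGroupId="0001",
)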
dev112233xx
1 year, 5 months ago
Selected Answer: ADF
ADF the correct answers ✅
upvoted 1 times
...
mfsec
1 year, 5 months ago
Selected Answer: ADF
ADF is the best fit.
upvoted 1 times
...
gameoflove
1 year, 5 months ago
Selected Answer: ADF
I believe, This is correct approach https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/AutoFailover.html
upvoted 1 times
...
vherman
1 year, 5 months ago
Selected Answer: ADF
adf correct
upvoted 1 times
...
spd
1 year, 5 months ago
Selected Answer: ADE
Selecting E because - "Multi-AZ is enabled by default on Redis (cluster mode enabled) clusters" as per https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/AutoFailover.html
upvoted 1 times
...
higashikumi
1 year, 5 months ago
Option B is incorrect because unlimited mode is a configuration option for an Auto Scaling group that is used to handle bursty workloads, and it does not provide any additional availability benefits. Option C is incorrect because creating a read replica in the same Availability Zone does not provide any additional availability benefits, and it would not be able to take over in the event of a failure of the primary instance. Option F is incorrect because Multi-AZ is not an option for ElastiCache for Redis clusters.
upvoted 1 times
frfavoreto
1 year, 4 months ago
ElastiCache for Redis does support Multi-AZ: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/AutoFailover.html Option 'F' is correct.
upvoted 2 times
...
...
higashikumi
1 year, 5 months ago
A, D, E are the correct options to meet the requirements. Option A is correct because an Auto Scaling group with a minimum capacity of two instances and an Elastic Load Balancer distributing traffic across them can provide high availability and automatic recovery from failure. Option D is correct because a Multi-AZ deployment for the RDS instance will ensure that there is a synchronized standby copy of the database in a separate Availability Zone that can be used for automatic failover. Option E is correct because configuring an Auto Scaling group for the ElastiCache for Redis cluster will ensure that there is at least one available node at all times, and automatic recovery can be achieved by launching new instances to replace any failed nodes.
upvoted 1 times
marszalekm
7 months, 1 week ago
There isn't such a thing as an "Auto Scaling group for ElastiCache for Redis"; there is a "replication group".
upvoted 1 times
...
...
Ajani
1 year, 5 months ago
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/AutoScaling.html
upvoted 1 times
...
gameoflove
1 year, 5 months ago
Selected Answer: ADF
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/AutoFailover.html
upvoted 1 times
...
spd
1 year, 6 months ago
Why F and not E? ElastiCache for Redis natively supports automatic Multi-AZ failover.
upvoted 2 times
Ajani
1 year, 5 months ago
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/AutoScaling.html
upvoted 1 times
spd
1 year, 5 months ago
This does not answer why E is not correct.
upvoted 1 times
...
...
...
Musk
1 year, 6 months ago
I don't dislike C
upvoted 1 times
...
Untamables
1 year, 8 months ago
Selected Answer: ADF
No doubt, ADF. Option C is wrong. Creating a read replica 'in the same availability zone' makes no sense.
upvoted 2 times
...
aimik
1 year, 8 months ago
ADF, all of them for high availability.
upvoted 2 times
...
ptpho
1 year, 8 months ago
I go with ADF. Hope we have 74 questions like this =))
upvoted 2 times
...
Arun_Bala
1 year, 8 months ago
Selected Answer: ADF
I go for ADF as the correct answer.
upvoted 2 times
...
Question #10 Topic 1

A retail company is operating its ecommerce application on AWS. The application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The company uses an Amazon RDS DB instance as the database backend. Amazon CloudFront is configured with one origin that points to the ALB. Static content is cached. Amazon Route 53 is used to host all public zones.
After an update of the application, the ALB occasionally returns a 502 status code (Bad Gateway) error. The root cause is malformed HTTP headers that are returned to the ALB. The webpage returns successfully when a solutions architect reloads the webpage immediately after the error occurs.
While the company is working on the problem, the solutions architect needs to provide a custom error page instead of the standard ALB error page to visitors.
Which combination of steps will meet this requirement with the LEAST amount of operational overhead? (Choose two.)

  • A. Create an Amazon S3 bucket. Configure the S3 bucket to host a static webpage. Upload the custom error pages to Amazon S3.
  • B. Create an Amazon CloudWatch alarm to invoke an AWS Lambda function if the ALB health check response Target.FailedHealthChecks is greater than 0. Configure the Lambda function to modify the forwarding rule at the ALB to point to a publicly accessible web server.
  • C. Modify the existing Amazon Route 53 records by adding health checks. Configure a fallback target if the health check fails. Modify DNS records to point to a publicly accessible webpage.
  • D. Create an Amazon CloudWatch alarm to invoke an AWS Lambda function if the ALB health check response Elb.InternalError is greater than 0. Configure the Lambda function to modify the forwarding rule at the ALB to point to a publicly accessible web server.
  • E. Add a custom error response by configuring a CloudFront custom error page. Modify DNS records to point to a publicly accessible web page.

Correct Answer: CE 🗳️

Community vote distribution
AE (94%)
4%

Raj40
Highly Voted 1 year, 8 months ago
A & E https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/GeneratingCustomErrorResponses.html#custom-error-pages-procedure
upvoted 33 times
...
atirado
Highly Voted 8 months, 1 week ago
Selected Answer: AE
Option A - This option helps: allows exposing custom error pages from a highly available location.
Option B - This option requires a lot of setup.
Option C - This option might not work, because modifying DNS will redirect all traffic to the publicly accessible webpage.
Option D - This option requires a lot of setup.
Option E - This option helps: shows a custom error page when the error occurs.
upvoted 7 times
...
agatim
Most Recent 1 month, 3 weeks ago
Selected Answer: AC
Option A - Allows us to expose an error page with low effort.
Option B - Requires a lot of setup.
Option C - Allows us to redirect all the traffic to our error page hosted on S3 in case of errors.
Option D - Requires a lot of setup.
Option E - Custom error pages in CloudFront refer to the same origin (in our case the load balancer), so it does not work with the other answers.
So the correct answers are A and C.
upvoted 1 times
...
roger8t8
2 months, 1 week ago
A & E https://aws.amazon.com/blogs/aws/custom-error-pages-and-responses-for-amazon-cloudfront/
upvoted 1 times
...
azhar3128
2 months, 3 weeks ago
I think it is wordplay. Option A says to upload "error pages", which implies the overhead of creating a page for each error, and is unnecessary. That's why C & E are correct.
upvoted 3 times
...
iulian0585
3 months ago
Selected Answer: AE
A and E according to the AWS documentation: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/GeneratingCustomErrorResponses.html#custom-error-pages-procedure
upvoted 1 times
...
kz407
5 months, 2 weeks ago
Selected Answer: AE
The only problem with E is that it says "Modify DNS records to point to a publicly accessible web page" at the end. That doesn't make sense to begin with, and configuring custom error responses in CloudFront has nothing to do with DNS anyway.
upvoted 5 times
...
MoT0ne
5 months, 2 weeks ago
I think the reason it's not A is this sentence: "The webpage returns successfully when a solutions architect reloads the webpage immediately after the error occurs." So let's not treat it as requiring a maintenance page.
upvoted 1 times
...
abeb
9 months ago
should be AE
upvoted 2 times
...
severlight
9 months, 2 weeks ago
Selected Answer: AE
I haven't found out why we should use C.
upvoted 2 times
...
bur4an
12 months ago
Selected Answer: AE
Agree with Raj40
upvoted 3 times
...
dimitry_khan_arc
1 year ago
Selected Answer: CE
C & E. B & D are incorrect. Managing lambda is overhead. A is incorrect. Static page from S3 need to retrieve with custom code.
upvoted 2 times
jainparag1
9 months, 1 week ago
Do you have any further reference to your explanation of custom code requirement to fetch the error page from S3?
upvoted 1 times
...
_Jassybanga_
6 months, 3 weeks ago
Not really. You just need the static URL provided by AWS when you use the bucket for static website hosting, and you can embed it anywhere to reach the static website.
upvoted 1 times
...
...
cattle_rei
1 year, 1 month ago
Selected Answer: AE
AE because it accomplishes the task and is the least complex.
upvoted 4 times
...
NikkyDicky
1 year, 2 months ago
Selected Answer: AE
AE is right
upvoted 2 times
...
Parimal1983
1 year, 2 months ago
Selected Answer: AE
Custom error pages need to be set up in a different location than the source (where the web pages are hosted); then configure CloudFront to use those custom error pages.
upvoted 2 times
...
rtguru
1 year, 3 months ago
Correct answer is A&E
upvoted 2 times
...
Sarutobi
1 year, 4 months ago
Selected Answer: AE
We need a combination, so A provides the error page; should we go with DNS health-check (C+A) or CloudFront (E+A)? In my case, I try to stick to a single service to do failover, and DNS is a great option, but it looks like, in this question, CloudFront is already present with the least-operational overhead.
upvoted 5 times
...
mfsec
1 year, 5 months ago
Selected Answer: AE
AE - easy
upvoted 1 times
...
kiran15789
1 year, 5 months ago
Selected Answer: AE
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-types.html
upvoted 1 times
...
higashikumi
1 year, 5 months ago
Explanation: Option A allows the creation of a custom error page that can be hosted on an S3 bucket. Option E provides a way to configure a custom error response for CloudFront, which can point to the S3 bucket hosting the error page. This allows visitors to see a custom error page without modifying any of the application infrastructure.
upvoted 3 times
...
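Putting A and E together: a hedged boto3 sketch that uploads the error page to S3 and builds the CustomErrorResponses fragment that would be merged into the distribution config via get_distribution_config / update_distribution. The bucket name and paths are placeholders, and the error page path is assumed to be served from an S3 origin added to the distribution:

import boto3

BUCKET = "example-error-pages"   # placeholder bucket name

# A: host the custom error page as a static object in S3.
boto3.client("s3").put_object(
    Bucket=BUCKET,
    Key="errors/502.html",
    Body=b"<html><body><h1>We'll be right back.</h1></body></html>",
    ContentType="text/html",
)

# E: CustomErrorResponses fragment for the CloudFront distribution config.
# This dict would be merged into the config returned by
# get_distribution_config() and applied with update_distribution().
custom_error_responses = {
    "Quantity": 1,
    "Items": [{
        "ErrorCode": 502,
        "ResponsePagePath": "/errors/502.html",  # served from the S3 origin
        "ResponseCode": "502",
        "ErrorCachingMinTTL": 10,
    }],
}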
dev112233xx
1 year, 6 months ago
Selected Answer: AE
A&E are the correct answers imo
upvoted 1 times
...
Pratap
1 year, 6 months ago
A and E as per https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/GeneratingCustomErrorResponses.html#custom-error-pages-procedure
upvoted 1 times
...
God_Is_Love
1 year, 6 months ago
A is incorrect because CloudFront already handles OAI and it's easy to build an error page with it. DNS record changes apply pretty quickly, so C & E are correct.
upvoted 3 times
...
vsk12
1 year, 7 months ago
A & C as S3 can be used to host the static website and Route 53 can be configured for health checks and fail-over routing. Refer AWS documentation - Route 53 Fail Over S3 (https://aws.amazon.com/premiumsupport/knowledge-center/fail-over-s3-r53/) Option E is wrong as CloudFront would return the error response for failure and does not provide a page that Route 53 can point to.
upvoted 2 times
MRL110
1 year, 1 month ago
Option E says: "Modify DNS records to point to a publicly accessible web page" which should mean Route53 here I guess.
upvoted 1 times
...
...
masetromain
1 year, 7 months ago
Selected Answer: AE
Option A: Creating an S3 bucket and uploading custom error pages to it will allow you to provide a custom error page to visitors when the ALB returns a 502 error.
Option E: By configuring CloudFront custom error pages, visitors will be redirected to a publicly accessible web page when a 502 error occurs. DNS records can be modified to point to a publicly accessible web page, which will be displayed when the error occurs.
Options B and D are not best practice, since they would change the behavior of the load balancer, and that is not the best way to display custom error pages. Option C is not related to custom error pages and is not the best way to handle the problem.
upvoted 3 times
...
excoRt
1 year, 8 months ago
Selected Answer: AE
A & E - Classic Cloudfront error page mechanism
upvoted 2 times
...
Untamables
1 year, 8 months ago
Selected Answer: AE
Options A and E are the simplest way to meet the requirement.
upvoted 3 times
...
ptpho
1 year, 8 months ago
I go with AE, since Route 53 "Evaluate Target Health" works with alias records that support health checks, so a CloudFront distribution cannot be selected.
upvoted 2 times
...
JimmyWong0911
1 year, 8 months ago
Selected Answer: AE
AE SAP-C01 #831
upvoted 3 times
...
spencer_sharp
1 year, 8 months ago
AE SAP-C01 #831
upvoted 2 times
...
robertohyena
1 year, 8 months ago
Answer: A & C C & E never state where is the publicly accessible webpage.
upvoted 4 times
...
masetromain
1 year, 8 months ago
I want to answer AC: answer A to have a static web page, and answer C to have an ALB health check.
upvoted 3 times
masetromain
1 year, 7 months ago
I was wrong the answer is AE https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-professional/view/3/
upvoted 1 times
...
...
Question #11 Topic 1

A company has many AWS accounts and uses AWS Organizations to manage all of them. A solutions architect must implement a solution that the company can use to share a common network across multiple accounts.
The company’s infrastructure team has a dedicated infrastructure account that has a VPC. The infrastructure team must use this account to manage the network. Individual accounts cannot have the ability to manage their own networks. However, individual accounts must be able to create AWS resources within subnets.
Which combination of actions should the solutions architect perform to meet these requirements? (Choose two.)

  • A. Create a transit gateway in the infrastructure account.
  • B. Enable resource sharing from the AWS Organizations management account.
  • C. Create VPCs in each AWS account within the organization in AWS Organizations. Configure the VPCs to share the same CIDR range and subnets as the VPC in the infrastructure account. Peer the VPCs in each individual account with the VPC in the infrastructure account.
  • D. Create a resource share in AWS Resource Access Manager in the infrastructure account. Select the specific AWS Organizations OU that will use the shared network. Select each subnet to associate with the resource share.
  • E. Create a resource share in AWS Resource Access Manager in the infrastructure account. Select the specific AWS Organizations OU that will use the shared network. Select each prefix list to associate with the resource share.
Reveal Solution Hide Solution

Correct Answer: AD 🗳️

Community vote distribution
BD (86%)
12%

masetromain
Highly Voted 1 year, 8 months ago
Selected Answer: BD
I go with BD
upvoted 25 times
masetromain
1 year, 7 months ago
Step B is needed because it enables the organization to share resources across accounts. Step D is needed because it allows the infrastructure account to share specific subnets with the other accounts in the organization, so that the other accounts can create resources within those subnets without having to manage their own networks. (A minimal sketch of both steps follows this thread.)
upvoted 12 times
8693a49
3 weeks, 6 days ago
Note that B says it enables sharing from the management account, but "the infrastructure team must use the infrastructure account to manage the network", so there is nothing to share from the management account. Also, options D and E enable resource sharing too (you don't need to enable it from the management account; other accounts can enable resource sharing as well). VPCs can't talk to each other by default. You need to do something to 'glue' them together into a larger network.
upvoted 1 times
...
...
...
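For those who land on B+D, here is a minimal boto3 sketch of the two steps. The subnet ARNs and the OU ARN are hypothetical placeholders:

import boto3

# Step B: run once from the AWS Organizations management account.
ram_mgmt = boto3.client("ram")
ram_mgmt.enable_sharing_with_aws_organization()

# Step D: run from the infrastructure account that owns the VPC.
ram_infra = boto3.client("ram")  # assumes infrastructure-account credentials
ram_infra.create_resource_share(
    name="shared-network",
    resourceArns=[
        "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0abc1234",  # hypothetical
        "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0def5678",  # hypothetical
    ],
    principals=[
        # OU whose member accounts may launch resources into the shared subnets.
        "arn:aws:organizations::111122223333:ou/o-exampleorgid/ou-root-exampleou",
    ],
    allowExternalPrincipals=False,  # keep the share inside the organization
)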
razguru
Highly Voted 1 year, 8 months ago
A - Doesn't seem correct, as the question didn't state multiple VPCs, so a transit gateway is not relevant. I will go with B & D.
upvoted 8 times
8693a49
3 weeks, 6 days ago
There are multiple VPCs because each account must have at least one.
upvoted 1 times
...
...
8693a49
Most Recent 3 weeks, 6 days ago
Selected Answer: AE
Voting A & E
upvoted 1 times
...
8693a49
3 weeks, 6 days ago
It's AD. To form a network between multiple accounts, each with their own VPCs, you can use VPC peering or Transit Gateway. But VPC peering is only suitable for a few accounts, and we have many, so we need to create a TGW (A). Then we need to associate it with the VPCs across all accounts; we do this through RAM, and we need to configure the route tables in all accounts to use the TGW, which is done through prefixes (D). See https://docs.aws.amazon.com/prescriptive-guidance/latest/integrate-third-party-services/architecture-3-1.html The question is a bit weird because the answer could allow accounts to manage the network inside their own VPCs, so probably some SCP policies are needed to prevent this. But the accounts cannot edit the TGW routing, so probably that's what they were trying to suggest.
upvoted 1 times
8693a49
3 weeks, 6 days ago
I meant to say AE, but I can't edit the post now.
upvoted 1 times
...
...
cnethers
2 months, 1 week ago
I would go BD. When you share a subnet using AWS Resource Access Manager (RAM) with another AWS account, the resources within that shared subnet can communicate with each other and with the resources in the account that owns the subnet. However, for outbound network connectivity to other VPCs, on-premises networks, or the internet, you need to set up additional networking components.
upvoted 1 times
cnethers
2 months, 1 week ago
2. Inter-VPC Communication:
- If the resources in the shared subnet need to communicate with resources in another VPC (either within the same AWS account or in a different AWS account), you can use VPC Peering or a Transit Gateway.
- VPC Peering: Establish a peering connection between the VPCs and update the route tables accordingly.
- Transit Gateway: Create a Transit Gateway, attach both VPCs to the Transit Gateway, and configure the necessary route tables and Transit Gateway route tables.
upvoted 1 times
...
cnethers
2 months, 1 week ago
Here's a breakdown of different scenarios and the required setup:
1. Internet Access:
- If you need resources in the shared subnet to access the internet, ensure that the subnet is a public subnet with an associated Internet Gateway (IGW) and appropriate route table entries.
- The account that owns the VPC will typically manage the IGW and the route tables.
upvoted 1 times
...
cnethers
2 months, 1 week ago
3. On-Premises Connectivity:
- If the resources in the shared subnet need to communicate with an on-premises network, you can use AWS Direct Connect or a Site-to-Site VPN.
- These connections can be routed through a Transit Gateway for a more scalable and manageable network architecture.
upvoted 1 times
...
...
[Removed]
2 months, 2 weeks ago
To my understanding, it should be AD. https://docs.aws.amazon.com/prescriptive-guidance/latest/integrate-third-party-services/architecture-3-1.html Enabling resource sharing from the AWS Organizations management account is not required, as the infrastructure account can create and manage resource shares.
upvoted 2 times
...
rapatajones
4 months, 4 weeks ago
Selected Answer: BE
B and E are correct.
upvoted 1 times
...
rapatajones
4 months, 4 weeks ago
BE, for sure.
upvoted 1 times
...
gofavad926
5 months, 1 week ago
Selected Answer: BD
BD, as mentioned in other comments
upvoted 1 times
...
atirado
8 months, 1 week ago
Selected Answer: BD
Option A - Does not assist with allowing OUs to create resources in the subnets.
Option B - Allows sharing resources across the entire organization.
Option C - Does not work as a way to share subnets, because it creates multiple VPCs and subnets in the accounts rather than allowing accounts to manage resources in shared subnets.
Option D - Directly shares the subnets.
Option E - Does not assist, because it only shares pre-built CIDR blocks rather than subnets.
upvoted 4 times
8693a49
3 weeks, 6 days ago
Subnets cannot be shared
upvoted 1 times
...
...
shaaam80
8 months, 3 weeks ago
Selected Answer: BD
Answer - B & D. A is wrong. No TGW needed as customer has just 1 VPC. E is wrong - can't share resources via RAM using prefix lists. C is wrong - talks about creating VPCs with same CIDR ranges and VPC peering (not possible with overlapping CIDRs and not needed for this solution as there is just 1 VPC).
upvoted 2 times
...
GibaSP45
8 months, 3 weeks ago
Selected Answer: BE
https://docs.aws.amazon.com/ram/latest/userguide/getting-started-sharing.html
upvoted 3 times
...
abeb
9 months ago
BE is good
upvoted 1 times
...
AlbertS82
9 months, 2 weeks ago
Selected Answer: BD
B&D is the only correct answer
upvoted 2 times
...
severlight
9 months, 2 weeks ago
Selected Answer: BD
I don't see the way you can share a prefix list.
upvoted 1 times
8693a49
3 weeks, 6 days ago
You don't share a prefix list; you associate it with the shared resource (which here is a TGW). The way you do it is you add the prefixes to the route tables inside the accounts' VPCs. The prefixes will point towards the TGW. This makes network traffic destined for other accounts go through the TGW into those accounts based on the TGW routing table. The TGW routing table can only be controlled from the infrastructure account. (A minimal sketch of such a route follows this thread.)
upvoted 1 times
...
...
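To illustrate the route-table side of that description, here is a hedged boto3 sketch from a member account; all resource IDs are hypothetical placeholders:

import boto3

ec2 = boto3.client("ec2")

# Route every CIDR in the shared managed prefix list through the transit
# gateway that the infrastructure account shared via RAM.
ec2.create_route(
    RouteTableId="rtb-0abc1234def567890",            # the member VPC's route table
    DestinationPrefixListId="pl-0123456789abcdef0",  # shared managed prefix list
    TransitGatewayId="tgw-0fedcba9876543210",        # shared transit gateway
)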
senthilsekaran
9 months, 4 weeks ago
B & D correct
upvoted 1 times
...
ansgohar
11 months ago
Selected Answer: BD
I go with B & D
upvoted 1 times
srs27
9 months, 2 weeks ago
Do you really need the management account to share the resources among the accounts? I doubt it.
upvoted 2 times
...
...
sreed77
11 months, 2 weeks ago
Selected Answer: BD
Option B allows the infrastructure team to manage the network in the infrastructure account. It also allows individual accounts to create AWS resources within subnets. This is done by creating a resource share in AWS Resource Access Manager (RAM) in the infrastructure account. The resource share is then associated with the specific AWS Organizations OU that will use the shared network. The subnets are then associated with the resource share. Option D is also necessary because it allows the infrastructure team to control who has access to the shared network. This is done by assigning permissions to the resource share. Here are the steps involved in implementing this solution:
1. Create a resource share in RAM in the infrastructure account.
2. Select the specific AWS Organizations OU that will use the shared network.
3. Select each subnet to associate with the resource share.
4. Assign permissions to the resource share.
upvoted 4 times
...
dimitry_khan_arc
12 months ago
Selected Answer: BD
B & D are most relevant
upvoted 1 times
...
whenthan
1 year ago
Selected Answer: BD
https://aws.amazon.com/blogs/networking-and-content-delivery/vpc-sharing-a-new-approach-to-multiple-accounts-and-vpc-management/
upvoted 4 times
...
cattle_rei
1 year, 1 month ago
Selected Answer: BD
BD is the most correct, the rest are distractors
upvoted 1 times
...
cattle_rei
1 year, 1 month ago
BD seems the most correct
upvoted 1 times
...
NikkyDicky
1 year, 2 months ago
Selected Answer: BD
it's BD
upvoted 1 times
...
Parimal1983
1 year, 2 months ago
Selected Answer: BE
Using prefix lists, we can simplify routing tables instead of sharing individual subnets of the VPCs. We need to enable resource sharing at the organization level.
upvoted 4 times
lxrdm
1 year, 1 month ago
When you go into RAM and create a resource share, you can only select a subnet to share.
upvoted 1 times
Brightalw
1 year ago
Prefix lists (ec2:PrefixList): "Create and manage prefix lists centrally, and share them with other AWS accounts or your organization. This lets multiple AWS accounts reference prefix lists in their resources, such as VPC security groups and subnet route tables. For more information, see Working with shared prefix lists in the Amazon VPC User Guide."
upvoted 2 times
...
...
...
SkyZeroZx
1 year, 2 months ago
Selected Answer: BD
The correct answers are D and B. D will allow the infrastructure team to create a resource share in AWS Resource Access Manager in the infrastructure account. This will allow them to share the VPC with the other accounts in the organization. B will enable resource sharing from the AWS Organizations management account. This is required to allow the resource share to be created. C is not necessary, as the resource share will allow the other accounts to create resources in the shared VPC. A is not necessary, as the resource share will allow the other accounts to connect to the shared VPC through the transit gateway. E is not necessary, as the resource share will allow the other accounts to create resources in the shared VPC without the need for prefix lists.
upvoted 1 times
...
Amir70
1 year, 2 months ago
A. By creating a transit gateway in the infrastructure account, you establish a centralized hub for network connectivity. The transit gateway acts as a transit point for traffic between VPCs and accounts. C. Create VPCs in each individual AWS account within the organization and configure them to share the same CIDR range and subnets as the VPC in the infrastructure account. Then, peer the VPCs in each individual account with the VPC in the infrastructure account. This allows resources in the individual accounts to communicate over the shared network managed by the infrastructure team. By following these steps, the infrastructure team can maintain control over the network in the dedicated infrastructure account, while individual accounts can create resources within subnets and utilize the shared network. The transit gateway provides the connectivity between the VPCs in different accounts, enabling seamless communication and resource access.
upvoted 1 times
...
rtguru
1 year, 3 months ago
I go with A&D
upvoted 1 times
...
karma4moksha
1 year, 3 months ago
BD, agreed. A is wrong because if you share the network, there are no multiple networks, and hence no gateway is needed.
upvoted 1 times
...
Maja1
1 year, 4 months ago
Selected Answer: BD
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-sharing.html
upvoted 2 times
...
mfsec
1 year, 5 months ago
Selected Answer: BD
BD is correct
upvoted 3 times
...
mKrishna
1 year, 5 months ago
ANS: A & C. Option B is not required because AWS Organizations is already being used to manage the accounts. Resource sharing needs to be enabled, but this can be done by creating a resource share. Option D and E both involve creating a resource share in AWS Resource Access Manager (RAM), but they are not the correct solution for this scenario. Option D is specific to subnets, option E is specific to prefix lists, which are used for IP address ranges. Since VPCs are being used in this scenario, options D and E are not applicable.
upvoted 1 times
newtrojan
1 year, 3 months ago
AWS Organizations doesn't allow sharing by default: https://docs.aws.amazon.com/ram/latest/userguide/security-disable-sharing-with-orgs.html
upvoted 3 times
...
...
kiran15789
1 year, 5 months ago
wouldnt "Select each prefix list to associate with the resource share." will be use to do then go with selecting each subnet
upvoted 2 times
...
Ajani
1 year, 5 months ago
Q: a solution to share a common network across multiple accounts. A, because you need a way to route traffic; it's either this or VPC peering (not mentioned). D or E, because you can use RAM to share a subnet or prefixes. I am leaning towards E, because a prefix will be more efficient: e.g., rather than share a /24 subnet, I would share a /16 prefix (network summarization).
upvoted 1 times
...
masssa
1 year, 6 months ago
Answer is BD. https://aws.amazon.com/jp/premiumsupport/knowledge-center/vpc-share-subnet-with-another-account/
upvoted 4 times
masssa
1 year, 6 months ago
https://docs.aws.amazon.com/ja_jp/vpc/latest/userguide/vpc-sharing.html
upvoted 1 times
...
...
zozza2023
1 year, 6 months ago
Selected Answer: BD
B & D seems to be the correct answers
upvoted 1 times
...
skashanali
1 year, 8 months ago
Selected Answer: BD
Answer A doesn't make sense. You also need to enable sharing with AWS Organizations within the Resource Access Manager service to share the subnet. https://docs.aws.amazon.com/ram/latest/userguide/getting-started-sharing.html#getting-started-sharing-orgs
upvoted 1 times
...
Untamables
1 year, 8 months ago
Selected Answer: BD
AWS Resource Access Manager can share subnets with other AWS accounts. https://docs.aws.amazon.com/ram/latest/userguide/shareable.html
upvoted 2 times
...
ptpho
1 year, 8 months ago
I go with AD. "The company can use to share a common network across multiple accounts" -> TGW in the infrastructure account. Enabling resource sharing is optional, to share with all accounts "without having to enumerate each account".
upvoted 4 times
...
robertohyena
1 year, 8 months ago
B & D https://docs.aws.amazon.com/ram/latest/userguide/getting-started-sharing.html
upvoted 3 times
...
Question #12 Topic 1

A company wants to use a third-party software-as-a-service (SaaS) application. The third-party SaaS application is consumed through several API calls. The third-party SaaS application also runs on AWS inside a VPC.
The company will consume the third-party SaaS application from inside a VPC. The company has internal security policies that mandate the use of private connectivity that does not traverse the internet. No resources that run in the company VPC are allowed to be accessed from outside the company’s VPC. All permissions must conform to the principles of least privilege.
Which solution meets these requirements?

  • A. Create an AWS PrivateLink interface VPC endpoint. Connect this endpoint to the endpoint service that the third-party SaaS application provides. Create a security group to limit the access to the endpoint. Associate the security group with the endpoint.
  • B. Create an AWS Site-to-Site VPN connection between the third-party SaaS application and the company VPC. Configure network ACLs to limit access across the VPN tunnels.
  • C. Create a VPC peering connection between the third-party SaaS application and the company VPC. Update route tables by adding the needed routes for the peering connection.
  • D. Create an AWS PrivateLink endpoint service. Ask the third-party SaaS provider to create an interface VPC endpoint for this endpoint service. Grant permissions for the endpoint service to the specific account of the third-party SaaS provider.
Reveal Solution Hide Solution

Correct Answer: A 🗳️

Community vote distribution
A (91%)
9%

Raj40
Highly Voted 1 year, 8 months ago
Selected Answer: A
https://docs.aws.amazon.com/vpc/latest/privatelink/privatelink-access-saas.html
upvoted 18 times
...
masetromain
Highly Voted 1 year, 8 months ago
Selected Answer: A
I go with A
upvoted 8 times
masetromain
1 year, 7 months ago
A. Create an AWS PrivateLink interface VPC endpoint. Connect this endpoint to the endpoint service that the third-party SaaS application provides. Create a security group to limit the access to the endpoint. Associate the security group with the endpoint. This solution uses AWS PrivateLink, which creates a secure and private connection between the company's VPC and the third-party SaaS application's VPC, without the traffic traversing the internet. Using a security group and limiting access to the endpoint service conforms to the principle of least privilege. (A minimal consumer-side sketch follows this thread.)
upvoted 11 times
...
...
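A minimal boto3 sketch of option A from the consumer side; the VPC, subnet, CIDR, and endpoint service name are hypothetical placeholders:

import boto3

ec2 = boto3.client("ec2")

# Security group that restricts which sources can reach the endpoint ENIs.
sg = ec2.create_security_group(
    GroupName="saas-endpoint-sg",
    Description="Restrict access to the SaaS interface endpoint",
    VpcId="vpc-0abc1234def567890",
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpProtocol="tcp",
    FromPort=443,
    ToPort=443,
    CidrIp="10.0.0.0/16",  # only the application subnets, per least privilege
)

# Interface endpoint pointed at the provider's endpoint service name.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0abc1234def567890",
    ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0",
    SubnetIds=["subnet-0abc1234", "subnet-0def5678"],
    SecurityGroupIds=[sg["GroupId"]],
)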
gofavad926
Most Recent 5 months, 1 week ago
Selected Answer: A
A, the service provider creates an endpoint service and grants their customers access to the endpoint service. As the service consumer, you create an interface VPC endpoint, which establishes connections between one or more subnets in your VPC and the endpoint service.
upvoted 1 times
...
atirado
8 months, 1 week ago
Selected Answer: A
Option A - The interface VPC endpoint will provide local access to the SaaS service from within the company's VPC. Moreover, traffic to and access from the SaaS VPC will traverse the AWS network rather than the internet. This is considered private traffic.
Option B - This option might not work: nothing is said about whether the CIDR blocks in each VPC overlap. Moreover, nothing is said about whether bandwidth limitations on Site-to-Site VPN could be an issue.
Option C - This option might not work: nothing is said about whether the CIDR blocks in each VPC overlap.
Option D - This option will not work: a PrivateLink endpoint service is used for facilitating access to AWS services.
upvoted 2 times
...
shaaam80
8 months, 3 weeks ago
Selected Answer: A
Answer A. A VPC interface endpoint is used to access a service privately without traversing the internet: an AWS PrivateLink VPC endpoint to access the SaaS application.
upvoted 1 times
...
severlight
9 months, 2 weeks ago
Selected Answer: A
obvious
upvoted 1 times
...
senthilsekaran
9 months, 4 weeks ago
Correct Answer : A
upvoted 1 times
...
task_7
11 months, 2 weeks ago
Selected Answer: D
A vs. D. A: create an AWS PrivateLink interface VPC endpoint and connect it to the endpoint service that the third-party SaaS application provides. D: create an AWS PrivateLink endpoint service and ask the third-party SaaS provider to create an interface VPC endpoint for this endpoint service. D is right: the SaaS provider has to create an interface VPC endpoint for this endpoint service.
upvoted 4 times
_Jassybanga_
6 months, 3 weeks ago
Exactly - we need to access the resource from the SaaS provider, not vice versa. Hence, in this case the VPC endpoint should be provided by the SaaS provider for the PrivateLink endpoint service we provide to them - we use this for the Snowflake SaaS :)
upvoted 1 times
...
...
whenthan
1 year ago
Selected Answer: A
https://docs.aws.amazon.com/vpc/latest/privatelink/privatelink-access-saas.html https://aws.amazon.com/blogs/apn/enabling-new-saas-strategies-with-aws-privatelink/
upvoted 1 times
...
cattle_rei
1 year, 1 month ago
Selected Answer: A
It's A, because in this scenario we are consuming a service, not providing one, so that eliminates D.
upvoted 1 times
...
NikkyDicky
1 year, 2 months ago
Selected Answer: A
it's A
upvoted 1 times
...
SkyZeroZx
1 year, 2 months ago
Selected Answer: A
Create an AWS PrivateLink interface VPC endpoint.
upvoted 1 times
...
2aldous
1 year, 4 months ago
Selected Answer: A
Accessing SaaS products through AWS PrivateLink is the answer.
upvoted 1 times
...
mfsec
1 year, 5 months ago
Selected Answer: A
Create an AWS PrivateLink interface VPC endpoint.
upvoted 1 times
...
kiran15789
1 year, 5 months ago
Selected Answer: A
https://docs.aws.amazon.com/vpc/latest/privatelink/privatelink-access-saas.html
upvoted 1 times
...
ptpho
1 year, 8 months ago
It's A, clearly.
upvoted 4 times
...
spencer_sharp
1 year, 8 months ago
Selected Answer: A
https://docs.aws.amazon.com/vpc/latest/privatelink/privatelink-access-saas.html
upvoted 4 times
...
robertohyena
1 year, 8 months ago
A is correct. https://docs.aws.amazon.com/vpc/latest/privatelink/create-endpoint-service.html#share-endpoint-service
upvoted 5 times
...
Question #13 Topic 1

A company needs to implement a patching process for its servers. The on-premises servers and Amazon EC2 instances use a variety of tools to perform patching. Management requires a single report showing the patch status of all the servers and instances.
Which set of actions should a solutions architect take to meet these requirements?

  • A. Use AWS Systems Manager to manage patches on the on-premises servers and EC2 instances. Use Systems Manager to generate patch compliance reports.
  • B. Use AWS OpsWorks to manage patches on the on-premises servers and EC2 instances. Use Amazon QuickSight integration with OpsWorks to generate patch compliance reports.
  • C. Use an Amazon EventBridge rule to apply patches by scheduling an AWS Systems Manager patch remediation job. Use Amazon Inspector to generate patch compliance reports.
  • D. Use AWS OpsWorks to manage patches on the on-premises servers and EC2 instances. Use AWS X-Ray to post the patch status to AWS Systems Manager OpsCenter to generate patch compliance reports.
Reveal Solution Hide Solution

Correct Answer: A 🗳️

Community vote distribution
A (100%)

masetromain
Highly Voted 1 year, 8 months ago
Selected Answer: A
A is good https://docs.aws.amazon.com/prescriptive-guidance/latest/patch-management-hybrid-cloud/design-on-premises.html
upvoted 14 times
masetromain
1 year, 7 months ago
A is correct. AWS Systems Manager can manage patches on both on-premises servers and EC2 instances and can generate patch compliance reports. AWS OpsWorks and Amazon Inspector are not specifically designed for patch management and therefore would not be the best choice for this use case. Using an Amazon EventBridge rule and AWS X-Ray to generate patch compliance reports is not a practical solution, as they are not designed for patch management reporting. (A minimal reporting sketch follows this thread.)
upvoted 13 times
...
...
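To show what the reporting side of option A can look like, here is a minimal boto3 sketch that pulls per-node patch state. Registered on-premises servers appear alongside EC2 instances (with mi-* instead of i-* IDs); the patch group name is hypothetical:

import boto3

ssm = boto3.client("ssm")

# One compliance roll-up per managed node, EC2 and on-premises alike.
# (For large fleets, page through results with NextToken.)
resp = ssm.describe_instance_patch_states_for_patch_group(
    PatchGroup="prod-servers"  # hypothetical patch group
)
for state in resp["InstancePatchStates"]:
    print(
        state["InstanceId"],
        "installed:", state["InstalledCount"],
        "missing:", state["MissingCount"],
        "failed:", state["FailedCount"],
    )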
gofavad926
Most Recent 5 months, 1 week ago
Selected Answer: A
A is the correct answer
upvoted 1 times
...
MoT0ne
5 months, 2 weeks ago
Selected Answer: A
AWS OpsWorks is a configuration management service that provides a way to automate the deployment, configuration, and management of applications on EC2 instances. It is designed to help you manage the entire lifecycle of your applications.
upvoted 1 times
...
atirado
8 months, 1 week ago
Selected Answer: A
Option A - Systems Manager patches and generates patch compliance reports.
Option B - This option does not apply because Chef or Puppet are not mentioned in the question. Moreover, neither one directly performs patch management.
Option C - Inspector would generate a report for on-premises resources.
Option D - This option does not apply because Chef or Puppet are not mentioned in the question. Moreover, X-Ray does not apply.
upvoted 1 times
...
severlight
9 months, 2 weeks ago
Selected Answer: A
obvious
upvoted 1 times
...
whenthan
1 year ago
Selected Answer: A
A is correct
upvoted 1 times
...
stevegod0
1 year ago
A is correct: https://www.amazonaws.cn/en/systems-manager/
upvoted 1 times
...
cattle_rei
1 year, 1 month ago
Selected Answer: A
Other options are distractors. OpsWorks would be right only if the customer wanted to make use of existing scripts or know-how in Chef or Puppet.
upvoted 1 times
...
NikkyDicky
1 year, 2 months ago
Selected Answer: A
yep - A
upvoted 1 times
...
EricZhang
1 year, 3 months ago
A is the best, but Systems Manager cannot generate the patch compliance reports by itself. https://docs.aws.amazon.com/prescriptive-guidance/latest/patch-management-hybrid-cloud/design-on-premises.html
- A resource data sync in Systems Manager Inventory gathers the patching details and publishes them to an S3 bucket.
- Patch compliance reporting and dashboards are built in Amazon QuickSight from the S3 bucket information.
(A sketch of the resource data sync follows this thread.)
upvoted 1 times
...
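And a hedged sketch of the resource data sync mentioned above; the bucket name and Region are hypothetical placeholders:

import boto3

ssm = boto3.client("ssm")

# Push Systems Manager Inventory (including patch data) to S3, from which
# QuickSight dashboards can be built.
ssm.create_resource_data_sync(
    SyncName="patch-compliance-sync",
    S3Destination={
        "BucketName": "my-patch-compliance-bucket",  # hypothetical bucket
        "SyncFormat": "JsonSerDe",
        "Region": "us-east-1",
    },
)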
gameoflove
1 year, 3 months ago
Selected Answer: A
A is the right answer for this question as per information shared by them
upvoted 2 times
...
2aldous
1 year, 4 months ago
Selected Answer: A
Easy question :) A is the answer.
upvoted 1 times
...
mfsec
1 year, 5 months ago
Selected Answer: A
Use AWS Systems Manager to manage patches
upvoted 1 times
...
kiran15789
1 year, 5 months ago
Selected Answer: A
https://docs.aws.amazon.com/prescriptive-guidance/latest/patch-management-hybrid-cloud/design-on-premises.html
upvoted 1 times
...
gameoflove
1 year, 5 months ago
Selected Answer: A
AWS Systems Manager supports on-premises and EC2 instance patching.
upvoted 2 times
...
dev112233xx
1 year, 6 months ago
Selected Answer: A
A is correct ofc.. easy one )
upvoted 1 times
...
spencer_sharp
1 year, 8 months ago
Selected Answer: A
Same as SAP-C01 question 782.
upvoted 2 times
...
Raj40
1 year, 8 months ago
Selected Answer: A
https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-patch.html
upvoted 3 times
...
zhangyu20000
1 year, 8 months ago
A is correct
upvoted 2 times
...
Question #14 Topic 1

A company is running an application on several Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. The load on the application varies throughout the day, and EC2 instances are scaled in and out on a regular basis. Log files from the EC2 instances are copied to a central Amazon S3 bucket every 15 minutes. The security team discovers that log files are missing from some of the terminated EC2 instances.
Which set of actions will ensure that log files are copied to the central S3 bucket from the terminated EC2 instances?

  • A. Create a script to copy log files to Amazon S3, and store the script in a file on the EC2 instance. Create an Auto Scaling lifecycle hook and an Amazon EventBridge rule to detect lifecycle events from the Auto Scaling group. Invoke an AWS Lambda function on the autoscaling:EC2_INSTANCE_TERMINATING transition to send ABANDON to the Auto Scaling group to prevent termination, run the script to copy the log files, and terminate the instance using the AWS SDK.
  • B. Create an AWS Systems Manager document with a script to copy log files to Amazon S3. Create an Auto Scaling lifecycle hook and an Amazon EventBridge rule to detect lifecycle events from the Auto Scaling group. Invoke an AWS Lambda function on the autoscaling:EC2_INSTANCE_TERMINATING transition to call the AWS Systems Manager API SendCommand operation to run the document to copy the log files and send CONTINUE to the Auto Scaling group to terminate the instance.
  • C. Change the log delivery rate to every 5 minutes. Create a script to copy log files to Amazon S3, and add the script to EC2 instance user data. Create an Amazon EventBridge rule to detect EC2 instance termination. Invoke an AWS Lambda function from the EventBridge rule that uses the AWS CLI to run the user-data script to copy the log files and terminate the instance.
  • D. Create an AWS Systems Manager document with a script to copy log files to Amazon S3. Create an Auto Scaling lifecycle hook that publishes a message to an Amazon Simple Notification Service (Amazon SNS) topic. From the SNS notification, call the AWS Systems Manager API SendCommand operation to run the document to copy the log files and send ABANDON to the Auto Scaling group to terminate the instance.
Reveal Solution Hide Solution

Correct Answer: B 🗳️

Community vote distribution
B (100%)

masetromain
Highly Voted 1 year, 7 months ago
Selected Answer: B
B. Create an AWS Systems Manager document with a script to copy log files to Amazon S3. Create an Auto Scaling lifecycle hook and an Amazon EventBridge rule to detect lifecycle events from the Auto Scaling group. Invoke an AWS Lambda function on the autoscaling:EC2_INSTANCE_TERMINATING transition to call the AWS Systems Manager API SendCommand operation to run the document to copy the log files and send CONTINUE to the Auto Scaling group to terminate the instance. This approach uses the Auto Scaling lifecycle hook to execute the script that copies log files to S3 before the instance is terminated, ensuring that all log files are copied from the terminated instances. (A minimal Lambda sketch follows this thread.)
upvoted 12 times
...
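A minimal sketch of the Lambda handler for option B, assuming the EventBridge rule targets it with the standard "EC2 Instance-terminate Lifecycle Action" event; the SSM document name is hypothetical:

import boto3

ssm = boto3.client("ssm")
autoscaling = boto3.client("autoscaling")

def handler(event, context):
    # EventBridge delivers the lifecycle hook details in event["detail"].
    detail = event["detail"]
    instance_id = detail["EC2InstanceId"]

    # Run the (hypothetical) document that copies the instance's logs to S3.
    # A production version would poll get_command_invocation until the copy
    # finishes before completing the lifecycle action.
    ssm.send_command(
        InstanceIds=[instance_id],
        DocumentName="CopyLogsToS3",  # hypothetical SSM document
    )

    # Tell the Auto Scaling group it may finish terminating the instance.
    autoscaling.complete_lifecycle_action(
        LifecycleHookName=detail["LifecycleHookName"],
        AutoScalingGroupName=detail["AutoScalingGroupName"],
        LifecycleActionToken=detail["LifecycleActionToken"],
        LifecycleActionResult="CONTINUE",
        InstanceId=instance_id,
    )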
rtgfdv3
Highly Voted 1 year, 8 months ago
Selected Answer: B
https://aws.amazon.com/blogs/infrastructure-and-automation/run-code-before-terminating-an-ec2-auto-scaling-instance/
upvoted 7 times
...
gofavad926
Most Recent 5 months, 1 week ago
Selected Answer: B
B is the correct answer
upvoted 1 times
...
atirado
8 months, 1 week ago
Selected Answer: B
Option A - This option might not work: preventing ASG termination could create further trouble, and there is no guarantee the script will run if the instance happens to be unhealthy.
Option B - This option could work: running the script from the SSM API guarantees the script will run; using EventBridge to capture the ASG termination event provides a perfect place to hook in the call to SSM, which will also pause the termination until the script runs. Then CONTINUE allows the ASG termination to continue.
Option C - This option does not work because it does not solve the problem: terminating instances within the 15-minute window causes log files to be lost.
Option D - This option might not work: it does not rely on EventBridge to detect the ASG termination event. It also could create further trouble because no other actions will be performed due to sending ABANDON, though nothing is said about other actions in the question.
upvoted 5 times
...
severlight
9 months, 2 weeks ago
Selected Answer: B
Both ABANDON and CONTINUE will lead to instance termination; the difference is that ABANDON will prevent other lifecycle hooks from running.
upvoted 1 times
...
ansgohar
11 months ago
Selected Answer: B
B. Create an AWS Systems Manager document with a script to copy log files to Amazon S3. Create an Auto Scaling lifecycle hook and an Amazon EventBridge rule to detect lifecycle events from the Auto Scaling group. Invoke an AWS Lambda function on the autoscaling:EC2_INSTANCE_TERMINATING transition to call the AWS Systems Manager API SendCommand operation to run the document to copy the log files and send CONTINUE to the Auto Scaling group to terminate the instance.
upvoted 1 times
...
cattle_rei
12 months ago
Selected Answer: B
I think this is B. It could be A as well, but B is the better solution because the document in Systems Manager can be reused with other instances. Also, A would require using a custom image with the script, or user data to create the script, so more points of failure.
upvoted 1 times
...
softarts
1 year ago
Selected Answer: B
D is wrong; it shouldn't be "ABANDON".
upvoted 2 times
...
NikkyDicky
1 year, 2 months ago
Selected Answer: B
it's B
upvoted 1 times
...
gameoflove
1 year, 3 months ago
Selected Answer: B
B is the right answer: an Auto Scaling lifecycle hook and an Amazon EventBridge rule detect lifecycle events from the Auto Scaling group. Invoke an AWS Lambda function on the autoscaling:EC2_INSTANCE_TERMINATING transition to call the AWS Systems Manager API SendCommand operation to run the document to copy the log files and send CONTINUE to the Auto Scaling group.
upvoted 1 times
...
F_Eldin
1 year, 3 months ago
Selected Answer: B
A - wrong because preventing termination is not needed. C - wrong because the 5-minute frequency creates overhead or delay, and using user data for the script adds complexity. D - wrong because of SNS.
upvoted 2 times
...
2aldous
1 year, 4 months ago
Selected Answer: B
B. Smart solution :)
upvoted 3 times
...
mfsec
1 year, 5 months ago
Selected Answer: B
Systems Manager + EventBridge
upvoted 3 times
...
kiran15789
1 year, 5 months ago
Selected Answer: B
https://aws.amazon.com/blogs/infrastructure-and-automation/run-code-before-terminating-an-ec2-auto-scaling-instance/
upvoted 2 times
...
Untamables
1 year, 8 months ago
Selected Answer: B
B https://docs.aws.amazon.com/autoscaling/ec2/userguide/lifecycle-hooks.html
upvoted 4 times
...
masetromain
1 year, 8 months ago
I find answer C correct. But can a Lambda function that executes the script run at the same time that an instance is being terminated?
upvoted 1 times
masetromain
1 year, 8 months ago
I'm wrong the answer is B https://www.examtopics.com/discussions/amazon/view/69532-exam-aws-certified-solutions-architect-professional-topic-1/
upvoted 2 times
...
...
zhangyu20000
1 year, 8 months ago
B is correct https://docs.aws.amazon.com/autoscaling/ec2/userguide/tutorial-lifecycle-hook-lambda.html
upvoted 2 times
...
Raj40
1 year, 8 months ago
Selected Answer: B
Correct answer B
upvoted 4 times
...
Question #15 Topic 1

A company is using multiple AWS accounts. The DNS records are stored in a private hosted zone for Amazon Route 53 in Account A. The company’s applications and databases are running in Account B.
A solutions architect will deploy a two-tier application in a new VPC. To simplify the configuration, the db.example.com CNAME record set for the Amazon RDS endpoint was created in a private hosted zone for Amazon Route 53.
During deployment, the application failed to start. Troubleshooting revealed that db.example.com is not resolvable on the Amazon EC2 instance. The solutions architect confirmed that the record set was created correctly in Route 53.
Which combination of steps should the solutions architect take to resolve this issue? (Choose two.)

  • A. Deploy the database on a separate EC2 instance in the new VPC. Create a record set for the instance’s private IP in the private hosted zone.
  • B. Use SSH to connect to the application tier EC2 instance. Add an RDS endpoint IP address to the /etc/resolv.conf file.
  • C. Create an authorization to associate the private hosted zone in Account A with the new VPC in Account B.
  • D. Create a private hosted zone for the example.com domain in Account B. Configure Route 53 replication between AWS accounts.
  • E. Associate a new VPC in Account B with a hosted zone in Account A. Delete the association authorization in Account A.
Reveal Solution Hide Solution

Correct Answer: BC 🗳️

Community vote distribution
CE (100%)

masetromain
Highly Voted 1 year, 7 months ago
Selected Answer: CE
C and E are correct. C. Create an authorization to associate the private hosted zone in Account A with the new VPC in Account B. This step is necessary because the VPC in Account B needs to be associated with the private hosted zone in Account A to be able to resolve the DNS records. E. Associate the new VPC in Account B with the hosted zone in Account A, then delete the association authorization in Account A. This step is necessary because the association authorization should be removed in Account A after the association is done in Account B. (A minimal sketch of this sequence follows this thread.)
upvoted 33 times
...
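A minimal boto3 sketch of the C+E sequence; the hosted zone ID, Region, and VPC ID are hypothetical placeholders:

import boto3

zone_id = "Z123EXAMPLE"  # private hosted zone in Account A
vpc = {"VPCRegion": "us-east-1", "VPCId": "vpc-0abc1234def567890"}  # VPC in Account B

# Step C - with Account A credentials: authorize the association.
r53_account_a = boto3.client("route53")
r53_account_a.create_vpc_association_authorization(HostedZoneId=zone_id, VPC=vpc)

# Step E, part 1 - with Account B credentials: perform the association.
r53_account_b = boto3.client("route53")  # assumes Account B credentials here
r53_account_b.associate_vpc_with_hosted_zone(HostedZoneId=zone_id, VPC=vpc)

# Step E, part 2 - back in Account A: remove the now-used authorization.
r53_account_a.delete_vpc_association_authorization(HostedZoneId=zone_id, VPC=vpc)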
kiran15789
Highly Voted 1 year, 5 months ago
Selected Answer: CE
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zone-private-associate-vpcs-different-accounts.html
upvoted 9 times
...
7f6aef3
Most Recent 4 months, 2 weeks ago
Selected Answer: CE
https://repost.aws/knowledge-center/route53-private-hosted-zone
upvoted 1 times
...
8608f25
6 months, 2 weeks ago
Selected Answer: CE
Correct answers
upvoted 1 times
8608f25
6 months, 2 weeks ago
Explanation: * Option C is correct because, in a multi-account AWS setup, to use a Route 53 private hosted zone from one account (Account A) in another account’s VPC (Account B), you first need to create an authorization. This authorization is necessary for allowing the private hosted zone in one account to be associated with a VPC in another account. This step enables the resolution of DNS records stored in the private hosted zone across accounts. * Option E is correct as it follows up on the authorization created in Option C. Once the authorization is in place, you can then associate the new VPC in Account B with the private hosted zone in Account A. This association is what actually allows the EC2 instances within the VPC in Account B to resolve DNS queries using the private hosted zone in Account A, ensuring that db.example.com can be resolved as intended.
upvoted 4 times
8608f25
6 months, 2 weeks ago
Why the others are incorrect: * Option A is not a direct solution to the problem of DNS resolution across AWS accounts. Deploying the database on an EC2 instance does not address the issue of DNS resolution for the RDS endpoint across accounts. * Option B is not a scalable or AWS-recommended solution. Manually adding RDS endpoint IP addresses to the /etc/resolv.conf file on an EC2 instance is not practical for environments that require automation and could lead to issues if the RDS endpoint changes. * Option D involves creating a separate private hosted zone in Account B and configuring Route 53 replication between AWS accounts. This option is unnecessary and more complex than required. The direct association of VPCs across accounts to a single hosted zone is a simpler and more effective solution. Therefore, Options C and E are the steps that directly address the issue with the least complexity and enable the intended DNS resolution across AWS accounts.
upvoted 3 times
...
...
...
atirado
8 months, 1 week ago
Selected Answer: CE
Option A - This option does not work: it does not provide for solving name resolution in the new VPC.
Option B - This option works, but it breaks the company's architecture, where all DNS names are stored in the private zone in Account A.
Option C - This option contributes to the solution.
Option D - Breaks the company's architecture.
Option E - This option contributes to the solution.
upvoted 1 times
...
severlight
9 months, 2 weeks ago
Selected Answer: CE
obvious
upvoted 1 times
...
SfQ
10 months, 2 weeks ago
Selected Answer: CE
C and E are correct. B is not a good solution: it's a manual setup, and it may lose the configuration if we are using an ASG and launching new instances.
upvoted 1 times
...
Chainshark
10 months, 3 weeks ago
Why is B marked as correct?
upvoted 2 times
SfQ
10 months, 2 weeks ago
B is not a good solution: it's a manual setup, and it may lose the configuration if we are using an ASG and launching new instances.
upvoted 2 times
...
...
whenthan
1 year ago
Selected Answer: CE
https://repost.aws/knowledge-center/route53-private-hosted-zone Create an authorization to associate the private hosted zone, and, as a best practice, it is recommended to delete the association authorization in Account A - this step prevents you from recreating the same association later. To delete the authorization, reconnect to the EC2 instance in Account A.
upvoted 2 times
...
NikkyDicky
1 year, 2 months ago
Selected Answer: CE
it's CE
upvoted 1 times
...
Jonalb
1 year, 2 months ago
Selected Answer: CE
CE, definitely.
upvoted 1 times
...
SkyZeroZx
1 year, 2 months ago
Selected Answer: CE
C & E as Issue is associated with authorization
upvoted 1 times
...
AWS_Sam
1 year, 3 months ago
C + E are correct
upvoted 1 times
...
gameoflove
1 year, 3 months ago
Selected Answer: CE
C & E as Issue is associated with authorization
upvoted 1 times
...
Maria2023
1 year, 4 months ago
Selected Answer: CE
C and E are correct
upvoted 2 times
...
mfsec
1 year, 5 months ago
Selected Answer: CE
CE seems like the best choice.
upvoted 2 times
...
mKrishna
1 year, 5 months ago
ANS: A & C B is incorrect because modifying the /etc/resolv.conf file on the EC2 instance would not resolve the issue since the issue is with the Route 53 configuration.
upvoted 1 times
...
Musk
1 year, 6 months ago
Selected Answer: CE
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zone-private-associate-vpcs-different-accounts.html
upvoted 4 times
...
CloudFloater
1 year, 6 months ago
Selected Answer: CE
C and E. In order to resolve the issue, the solutions architect should create an authorization to associate the private hosted zone in Account A with the new VPC in Account B (Option C). This will allow the new VPC in Account B to access the DNS records stored in the private hosted zone in Account A. In addition, the solutions architect should associate the new VPC in Account B with the hosted zone in Account A (Option E) and delete the association authorization in Account A. This will ensure that the new VPC in Account B is properly configured to use the private hosted zone in Account A and resolve the db.example.com CNAME record set correctly.
upvoted 4 times
...
razguru
1 year, 8 months ago
C & E are correct options.
upvoted 1 times
...
masetromain
1 year, 8 months ago
Selected Answer: CE
With the comments and links, the answer is C and E (thanks robertohyena and JoshuaXu). C = step 6: run the command to create the association between Account A's private hosted zone and Account B's VPC, using the hosted zone's ID from step 3, in Account B. E = step 7: it is recommended to remove the association permission after the association is created; this will prevent recreating the same association later. https://aws.amazon.com/premiumsupport/knowledge-center/route53-private-hosted-zone/
upvoted 4 times
masetromain
1 year, 8 months ago
https://www.examtopics.com/discussions/amazon/view/36113-exam-aws-certified-solutions-architect-professional-topic-1/
upvoted 1 times
...
...
Raj40
1 year, 8 months ago
Selected Answer: CE
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zone-private-associate-vpcs-different-accounts.html
upvoted 4 times
...
JoshuaXu
1 year, 8 months ago
https://aws.amazon.com/premiumsupport/knowledge-center/route53-private-hosted-zone/
upvoted 1 times
...
robertohyena
1 year, 8 months ago
Correct answers: C & E
upvoted 2 times
...
Question #16 Topic 1

A company used Amazon EC2 instances to deploy a web fleet to host a blog site. The EC2 instances are behind an Application Load Balancer (ALB) and are configured in an Auto Scaling group. The web application stores all blog content on an Amazon EFS volume.
The company recently added a feature for bloggers to add video to their posts, attracting 10 times the previous user traffic. At peak times of day, users report buffering and timeout issues while attempting to reach the site or watch videos.
Which is the MOST cost-efficient and scalable deployment that will resolve the issues for users?

  • A. Reconfigure Amazon EFS to enable maximum I/O.
  • B. Update the blog site to use instance store volumes for storage. Copy the site contents to the volumes at launch and to Amazon S3 at shutdown.
  • C. Configure an Amazon CloudFront distribution. Point the distribution to an S3 bucket, and migrate the videos from EFS to Amazon S3.
  • D. Set up an Amazon CloudFront distribution for all site contents, and point the distribution at the ALB.
Reveal Solution Hide Solution

Correct Answer: C 🗳️

Community vote distribution
C (90%)
10%

masetromain
Highly Voted 1 year, 7 months ago
Selected Answer: C
C. Configure an Amazon CloudFront distribution. Point the distribution to an S3 bucket, and migrate the videos from EFS to Amazon S3. Amazon CloudFront is a content delivery network (CDN) that can be used to deliver content to users with low latency and high data transfer speeds. By configuring a CloudFront distribution for the blog site and pointing it at an S3 bucket, the videos can be cached at edge locations closer to users, reducing buffering and timeout issues. Additionally, S3 is designed for scalable storage and can handle high levels of user traffic. Migrating the videos from EFS to S3 also improves the performance and scalability of the website. (A minimal distribution sketch follows this thread.)
upvoted 25 times
...
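A hedged boto3 sketch of option C's distribution, using the legacy ForwardedValues cache settings to stay self-contained; the bucket name and caller reference are hypothetical, and a real deployment would also lock the bucket down with an origin access identity or origin access control:

import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": "blog-videos-0001",  # must be unique per distribution
        "Comment": "Serve blog videos from S3 via CloudFront",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "blog-videos-s3",
                "DomainName": "blog-videos.s3.amazonaws.com",  # hypothetical bucket
                "S3OriginConfig": {"OriginAccessIdentity": ""},  # empty = public origin
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "blog-videos-s3",
            "ViewerProtocolPolicy": "redirect-to-https",
            # Legacy cache settings; a managed cache policy also works.
            "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
            "MinTTL": 0,
        },
    }
)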
spencer_sharp
Highly Voted 1 year, 8 months ago
Selected Answer: C
No brainer
upvoted 9 times
...
Bereket
Most Recent 2 months, 1 week ago
Selected Answer: D
The most cost-efficient and scalable deployment that will resolve the issues for users, given the requirements and the described scenario, is: D. Set up an Amazon CloudFront distribution for all site contents, and point the distribution at the ALB.
upvoted 2 times
...
Christophe_
6 months, 2 weeks ago
Selected Answer: D
Option C - does not support new content added later by users and does not accelerate site content. Option D - accelerates the site and videos, and allows content to be added.
upvoted 2 times
e4bc18e
5 months, 3 weeks ago
CloudFront caches data to serve it more rapidly at the edge, without having to serve the content from the backend; that is acceleration. Also, you can still write new data to S3. Sorry, your choice is not correct.
upvoted 2 times
...
...
atirado
8 months, 1 week ago
Selected Answer: C
Option A - This option might not work and is not cheap: it will increase costs and has limited scalability. EFS is an expensive storage solution for videos.
Option B - This option might not work: nothing is mentioned about whether the application is stateful or stateless and whether the ALB has client stickiness, so using instance store could provide an inconsistent user experience. S3 is a cheap storage option.
Option C - This option will work and is cheap: a CloudFront distribution and S3 will provide the most scalability and availability possible from AWS, and both are very cheap options for distribution and storage of content.
Option D - This option might work but is not cheap: moving all content to CloudFront ensures it will be served from the edge cache for the duration of the cache, mitigating issues during high usage. However, nothing is said in the question about usage patterns, i.e., the performance issue will happen again for older content. Moreover, EFS is an expensive storage solution for video files compared to S3.
upvoted 1 times
...
ninomfr64
8 months, 2 weeks ago
Selected Answer: C
Not A, as Max I/O increases IOPS but negatively impacts latency, so ultimately you get little to no performance improvement. Also, you cannot enable Max I/O on an existing file system.
Not B, as this is not a cheap option (instance store generally costs more than EBS-backed storage); also, without a CDN there will be little performance improvement.
Not D, as this provides performance improvements, but at comparable performance to option C and higher cost: in D the videos stay on EFS, which costs more than S3, and all traffic goes through the CDN rather than only the videos that actually need edge caching.
Thus C provides performance improvements (thanks to CloudFront) with a cost-effective approach (S3 is cheap).
upvoted 1 times
ninomfr64
8 months, 2 weeks ago
Also, this follows the AWS best practice of separating static content from dynamic content, allowing for better scalability.
upvoted 1 times
...
...
geekos
8 months, 4 weeks ago
Selected Answer: C
C is good
upvoted 1 times
...
abeb
9 months ago
C is good
upvoted 1 times
...
severlight
9 months, 2 weeks ago
Selected Answer: C
obvious
upvoted 1 times
...
cattle_rei
12 months ago
Selected Answer: C
No doubt it's C. To me, the keyword there is scalable. S3 will be able to handle any amount of content users can generate. EFS is not the right solution for object storage; S3 is. EFS is a solution for a shareable network file system that can be mounted and used by many operating systems.
upvoted 1 times
...
Magoose
1 year, 1 month ago
Selected Answer: D
C and D are both viable. But D would be less overhead as you would most likely need to reconfigure the web application more to get it working with S3. Option D with Elastic Beanstalk provides a higher level of abstraction and automates many aspects of the application management, which can reduce operational overhead and simplify the re-architecting process
upvoted 2 times
totopopo
1 year, 1 month ago
D is not cost-effective, which was the demand of the question. If it were about fewer changes, I would go with it. Here, the right answer is C.
upvoted 1 times
...
...
NikkyDicky
1 year, 2 months ago
C is more cost-efficient
upvoted 1 times
...
karim_arous
1 year, 2 months ago
Selected Answer: C
C without a doubt
upvoted 1 times
...
gameoflove
1 year, 3 months ago
Selected Answer: C
C is the only option which meets their requirement.
upvoted 1 times
...
mfsec
1 year, 5 months ago
Selected Answer: C
Configure an Amazon CloudFront distribution.
upvoted 2 times
...
kiran15789
1 year, 5 months ago
Selected Answer: C
By configuring a CloudFront distribution for the blog site and pointing it at an S3 bucket, the videos can be cached at edge locations closer to users, reducing buffering and timeout issues.
upvoted 2 times
...
dev112233xx
1 year, 6 months ago
Selected Answer: C
C ofc.. i hope i will get such easy question in the real exam
upvoted 3 times
...
zozza2023
1 year, 6 months ago
Selected Answer: C
C is the correct
upvoted 2 times
...
komorebi
1 year, 8 months ago
C, definitely.
upvoted 3 times
...
zhangyu20000
1 year, 8 months ago
C is correct. D works, but is not as cheap as C.
upvoted 3 times
God_Is_Love
1 year, 6 months ago
Agree that C is correct; why do you think D is not cheaper?
upvoted 3 times
bcx
1 year, 2 months ago
Price per GB-month is cheaper in S3
upvoted 2 times
...
...
...
masetromain
1 year, 8 months ago
Selected Answer: C
answer C makes sense
upvoted 4 times
masetromain
1 year, 8 months ago
https://www.examtopics.com/discussions/amazon/view/6008-exam-aws-certified-solutions-architect-professional-topic-1/
upvoted 1 times
...
...
Question #17 Topic 1

A company with global offices has a single 1 Gbps AWS Direct Connect connection to a single AWS Region. The company’s on-premises network uses the connection to communicate with the company’s resources in the AWS Cloud. The connection has a single private virtual interface that connects to a single VPC.
A solutions architect must implement a solution that adds a redundant Direct Connect connection in the same Region. The solution also must provide connectivity to other Regions through the same pair of Direct Connect connections as the company expands into other Regions.
Which solution meets these requirements?

  • A. Provision a Direct Connect gateway. Delete the existing private virtual interface from the existing connection. Create the second Direct Connect connection. Create a new private virtual interface on each connection, and connect both private virtual interfaces to the Direct Connect gateway. Connect the Direct Connect gateway to the single VPC.
  • B. Keep the existing private virtual interface. Create the second Direct Connect connection. Create a new private virtual interface on the new connection, and connect the new private virtual interface to the single VPC.
  • C. Keep the existing private virtual interface. Create the second Direct Connect connection. Create a new public virtual interface on the new connection, and connect the new public virtual interface to the single VPC.
  • D. Provision a transit gateway. Delete the existing private virtual interface from the existing connection. Create the second Direct Connect connection. Create a new private virtual interface on each connection, and connect both private virtual interfaces to the transit gateway. Associate the transit gateway with the single VPC.
Reveal Solution Hide Solution

Correct Answer: A 🗳️

Community vote distribution
A (100%)

masetromain
Highly Voted 1 year, 7 months ago
Selected Answer: A
A. Provision a Direct Connect gateway. Delete the existing private virtual interface from the existing connection. Create the second Direct Connect connection. Create a new private virtual interface on each connection, and connect both private virtual interfaces to the Direct Connect gateway. Connect the Direct Connect gateway to the single VPC. This solution provides a redundant Direct Connect connection in the same Region by creating a new private virtual interface on each connection and connecting both private virtual interfaces to a Direct Connect gateway. The Direct Connect gateway is then connected to the single VPC. This solution also allows the company to expand into other Regions while providing connectivity through the same pair of Direct Connect connections. The Direct Connect gateway allows you to connect multiple VPCs and on-premises networks in different accounts and different Regions to a single Direct Connect connection. It also provides automatic failover and routing capabilities. (A minimal wiring sketch follows this thread.)
upvoted 22 times
masetromain
1 year, 7 months ago
Option D is not the best solution because it uses a Transit Gateway, which is used to connect multiple VPCs and on-premises networks in different accounts and different regions, but it is not necessary in this scenario. The company only wants to add a redundant Direct Connect connection in the same Region and connect it to the same VPC. Additionally, using a Transit Gateway in this scenario would add more complexity and might not be necessary. Also, Transit Gateway does not provide automatic failover and routing capabilities, which is required in this scenario. The Direct Connect Gateway is a better choice in this scenario as it provides the necessary functionality of automatic failover and routing capabilities, and it is more suitable for connecting multiple Direct Connect connections to a single VPC.
upvoted 12 times
Sarutobi
1 year, 6 months ago
All options here are problematic. The DX-GW is a control-plane-only device; in other words, no actual traffic goes over it. It is just a route reflector; it only carries the routing table. TGW is not a region construct, so by itself it cannot provide regional redundancy. In any case, all things considered, maybe A is the closest, but it should mention the VGW.
upvoted 2 times
Sarutobi
1 year, 6 months ago
I meant to say, "TGW is a region construct".
upvoted 1 times
...
...
...
anita_student
1 year, 6 months ago
Option D is not possible at all. You connect to TGW using transit VIF, not private VIF
upvoted 8 times
AMohanty
11 months, 2 weeks ago
Transit GW - connects both over Private VIF and Transit VIF
upvoted 1 times
...
...
...
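A minimal boto3 sketch of option A's wiring; the connection IDs, VLANs, BGP ASNs, and the VPC's virtual private gateway ID are hypothetical placeholders:

import boto3

dx = boto3.client("directconnect")

# Global Direct Connect gateway (amazonSideAsn is the Amazon-side BGP ASN).
gw = dx.create_direct_connect_gateway(
    directConnectGatewayName="corp-dx-gateway",
    amazonSideAsn=64512,
)
gw_id = gw["directConnectGateway"]["directConnectGatewayId"]

# One private VIF per physical connection, both attached to the gateway.
for conn_id, vlan in [("dxcon-exist1234", 101), ("dxcon-newab5678", 102)]:
    dx.create_private_virtual_interface(
        connectionId=conn_id,
        newPrivateVirtualInterface={
            "virtualInterfaceName": "pvif-%d" % vlan,
            "vlan": vlan,
            "asn": 65000,  # customer-side BGP ASN
            "directConnectGatewayId": gw_id,
        },
    )

# Attach the VPC through its virtual private gateway (works across Regions).
dx.create_direct_connect_gateway_association(
    directConnectGatewayId=gw_id,
    virtualGatewayId="vgw-0abc1234def567890",
)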
zozza2023
Highly Voted 1 year, 6 months ago
Selected Answer: A
A is the correct solution and the best
upvoted 5 times
...
kz407
Most Recent 5 months, 1 week ago
What I don't understand is why you need to delete the existing private VIF. Can't it be reassigned?
upvoted 2 times
...
MoT0ne
5 months, 2 weeks ago
Private Virtual Interface is a logical connection between your Direct Connect connection and a Direct Connect gateway. It is a virtual representation of the physical connection and allows you to establish connectivity to the VPCs associated with the Direct Connect gateway.
upvoted 1 times
...
KyleZheng
8 months ago
A. Because "Transit GW can also communicate from on-premises to AWS, but this one uses Site-to-Site VPN (IPsec VPN)."
upvoted 1 times
...
atirado
8 months, 1 week ago
Selected Answer: A
Option A - This option might work, however it is missing a step: connecting the Direct Connect gateway to a virtual private gateway in the single VPC (and in any VPC in a new Region).
Option B - This option will not work: it does not allow growing into new Regions and it does not create a redundant link.
Option C - This option will not work: using a public virtual interface does not connect VPC resources to on-premises.
Option D - This option might work, however it is missing multiple steps: each VPC will require its own Transit Gateway, each Transit Gateway will connect through an association with a Direct Connect gateway, and each Direct Connect connection will connect to the Direct Connect gateway using a transit VIF.
upvoted 2 times
...
ninomfr64
8 months, 2 weeks ago
Selected Answer: A
I have to admit that initially I picked a wrong answer; here are my findings after some docs browsing:
Not B, as this will provide Direct Connect (DX) redundancy but does not provide connectivity to other Regions.
Not C, as this will not even provide DX redundancy for the VPC, because the public VIF on the new connection does not provide access to the VPC.
Not D, as a Transit Gateway (TGW) is a regional resource and does not provide connectivity to other Regions (you can peer with a TGW in another Region). Also, you need a transit virtual interface to connect a DX to a TGW, or you need a DXGW to connect a VIF to a TGW.
A is correct, as a DXGW is a global resource that allows cross-Region attachments.
upvoted 3 times
...
shaaam80
8 months, 3 weeks ago
Selected Answer: A
Answer A. DCGW is the only option here as it supports both DC connections plus allows expansion into other regions. TGW does not span regions.
upvoted 3 times
...
severlight
9 months, 2 weeks ago
Selected Answer: A
multiple regions - dx gateway
upvoted 1 times
...
AMohanty
11 months, 2 weeks ago
None of the options seem to satisfy the condition "Solution must provide connectivity to other regions through the same pair of Direct Connect connections." In both options A and D, we don't talk about associating the second Region's VPC with the Transit Gateway or Direct Connect gateway.
upvoted 1 times
...
whenthan
1 year ago
Selected Answer: A
https://aws.amazon.com/blogs/aws/new-aws-direct-connect-gateway-inter-region-vpc-access/
upvoted 1 times
...
NikkyDicky
1 year, 2 months ago
Selected Answer: A
It's A. D is not supported.
upvoted 1 times
...
SkyZeroZx
1 year, 2 months ago
Selected Answer: A
A keyword === Direct Connect gateway
upvoted 1 times
...
gameoflove
1 year, 3 months ago
Selected Answer: A
A is the correct option, as a Direct Connect gateway with private virtual interfaces will meet the requirement.
upvoted 1 times
...
mfsec
1 year, 5 months ago
Selected Answer: A
Provision a Direct Connect gateway.
upvoted 2 times
...
God_Is_Love
1 year, 6 months ago
Logical answer: B and C fit the existing architecture in the question, but with the redundant DX connection requirement, the only solution is a gateway. That resolves to A (Direct Connect gateway) or D (Transit Gateway), but D is wrong because it mentions private interfaces connecting to a transit gateway, which is odd [usually VPC attachments are what connect to a transit gateway]. So the answer is A - Direct Connect gateway. (In fact, this is future-proof for when we want different VPCs in different Regions later with this architecture.)
upvoted 3 times
...
Untamables
1 year, 8 months ago
Selected Answer: A
A https://docs.aws.amazon.com/whitepapers/latest/hybrid-connectivity/aws-dx-dxgw-with-vgw-multi-regions-and-aws-public-peering.html
upvoted 3 times
...
spencer_sharp
1 year, 8 months ago
Selected Answer: A
transit gateway does not support cross-region
upvoted 4 times
Mahakali
1 year, 6 months ago
https://aws.amazon.com/about-aws/whats-new/2019/12/aws-transit-gateway-supports-inter-region-peering/ But Still answer is A
upvoted 1 times
...
...
zhangyu20000
1 year, 8 months ago
A is correct because a Direct Connect gateway supports multiple Regions.
upvoted 2 times
...
masetromain
1 year, 8 months ago
Selected Answer: A
I go with A https://docs.aws.amazon.com/directconnect/latest/UserGuide/direct-connect-gateways-intro.html https://jayendrapatil.com/aws-direct-connect-gateway/
upvoted 2 times
masetromain
1 year, 8 months ago
https://www.examtopics.com/discussions/amazon/view/69343-exam-aws-certified-solutions-architect-professional-topic-1/
upvoted 1 times
...
...
Question #18 Topic 1

A company has a web application that allows users to upload short videos. The videos are stored on Amazon EBS volumes and analyzed by custom recognition software for categorization.
The website contains static content that has variable traffic with peaks in certain months. The architecture consists of Amazon EC2 instances running in an Auto Scaling group for the web application and EC2 instances running in an Auto Scaling group to process an Amazon SQS queue. The company wants to re-architect the application to reduce operational overhead using AWS managed services where possible and remove dependencies on third-party software.
Which solution meets these requirements?

  • A. Use Amazon ECS containers for the web application and Spot instances for the Auto Scaling group that processes the SQS queue. Replace the custom software with Amazon Rekognition to categorize the videos.
  • B. Store the uploaded videos in Amazon EFS and mount the file system to the EC2 instances for the web application. Process the SQS queue with an AWS Lambda function that calls the Amazon Rekognition API to categorize the videos.
  • C. Host the web application in Amazon S3. Store the uploaded videos in Amazon S3. Use S3 event notification to publish events to the SQS queue. Process the SQS queue with an AWS Lambda function that calls the Amazon Rekognition API to categorize the videos.
  • D. Use AWS Elastic Beanstalk to launch EC2 instances in an Auto Scaling group for the web application and launch a worker environment to process the SQS queue. Replace the custom software with Amazon Rekognition to categorize the videos.

Correct Answer: D 🗳️

Community vote distribution
C (87%)
13%

masetromain
Highly Voted 1 year, 7 months ago
Selected Answer: C
This solution meets the requirements by using multiple managed services offered by AWS which can reduce the operational overhead. Hosting the web application in Amazon S3 would make it highly available, scalable and can handle variable traffic. The uploaded videos can be stored in S3 and processed using S3 event notifications that trigger a Lambda function, which calls the Amazon Rekognition API to categorize the videos. SQS can be used to process the event notifications and also it is a managed service. This solution eliminates the need to manage EC2 instances, EBS volumes and the custom software. Additionally, using Lambda function in this case, eliminates the need for managing additional servers to process the SQS queue which will reduce operational overhead. By using this solution, the company can benefit from the scalability, reliability, and cost-effectiveness that these services offer, which can help to reduce operational overhead and improve the overall performance and security of the application.
upvoted 28 times
Mahakali
11 months ago
Any explanation on option A ?
upvoted 1 times
...
...
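As a concrete illustration of the pipeline masetromain describes, here is a minimal sketch of the Lambda consumer, assuming an SQS event source mapping is configured on the function. Note that video analysis in Rekognition is asynchronous: StartLabelDetection returns a job ID, unlike the synchronous image APIs.

```python
# A minimal sketch of the option-C consumer. Bucket and key names come from
# the S3 event notification carried inside each SQS message body.
import json
import urllib.parse

import boto3

rekognition = boto3.client("rekognition")

def handler(event, context):
    for record in event["Records"]:            # one SQS message per record
        s3_event = json.loads(record["body"])  # the S3 event notification
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(s3_record["s3"]["object"]["key"])
            # Video analysis is asynchronous: StartLabelDetection returns a
            # JobId, which is later polled (or delivered via SNS) with
            # GetLabelDetection to read the category labels.
            job = rekognition.start_label_detection(
                Video={"S3Object": {"Bucket": bucket, "Name": key}}
            )
            print("started Rekognition job", job["JobId"])
```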
RaghavendraPrakash
Highly Voted 1 year, 4 months ago
D. Because you cannot host a web application in S3, only static web assets. Elastic Beanstalk provides an easy way to onboard auto-scaling web apps with minimal operational overhead.
upvoted 12 times
7f6aef3
4 months, 2 weeks ago
Rekognition does not query EBS directly, but you can upload data to a Rekognition-compatible storage resource, such as S3, for Rekognition to perform analysis on that data.
upvoted 1 times
...
gofavad926
5 months, 1 week ago
"The company wants to re-architect the application "...
upvoted 1 times
...
Arnaud92
12 months ago
But it is explicitly stated that the web app is just static content...
upvoted 2 times
Bloops
11 months, 3 weeks ago
"The website contains static content" Contains do not means that all the website is just static
upvoted 1 times
Six_Fingered_Jose
11 months, 3 weeks ago
They also do not mention the website has any dynamic content so there's that
upvoted 9 times
...
...
...
jpa8300
8 months ago
D is right and valid, but C seems to me a more complete and better solution. And I agree that the site seems to be only static content. Usually, when it has dynamic content, that is mentioned in the question.
upvoted 1 times
...
...
ff32d79
Most Recent 2 weeks, 5 days ago
I saw this question in another question bank (the owner of the questions) and there the answer is A; the reasoning is that an app moving files back and forth cannot be a static page, so it is A.
upvoted 1 times
...
Helpnosense
2 months, 1 week ago
Selected Answer: C
Only answer C covers all the requirements: where the videos are stored, how SQS messages are produced and consumed, and how the web app is hosted.
upvoted 1 times
...
Bereket
2 months, 1 week ago
Selected Answer: C
C. Host the web application in Amazon S3. Store the uploaded videos in Amazon S3. Use S3 event notification to publish events to the SQS queue. Process the SQS queue with an AWS Lambda function that calls the Amazon Rekognition API to categorize the videos.
Explanation - hosting the web application in Amazon S3:
- Cost-effective and scalable: Amazon S3 is a cost-effective and scalable solution for hosting static web content. It can handle variable traffic efficiently without the need to manage servers.
- Static content hosting: ideal for serving static content like HTML, CSS, JavaScript, and media files.
upvoted 1 times
...
gofavad926
5 months, 1 week ago
Selected Answer: C
C, this is a typical scenario
upvoted 1 times
...
kz407
5 months, 1 week ago
Selected Answer: C
While I vote for C, I do think that whether we can go with C really depends on the application codebase. The use case mentions that the application enables file uploads. We know that handling files requires a backend if your application is written in something like Java; if that's the case, you won't be able to host your application in S3. The phrase "website contains static content" is really vague, as it does not reveal anything about the backend of the application. Now, the fact that the application uses EBS to store video files gives a hint that the application has some backend code. I take a hint from "re-architect", which I assume involves some revamping of the application's codebase. So, here's how I'd go about "re-architecting":
1. Move storage of files to S3.
2. Eliminate the backend codebase, and revamp the frontend codebase to rely entirely on the AWS JS SDK and handle file uploads with that (see the sketch after this thread). Now you don't need to manage any compute resources at all.
3. Go about the rest of the solution.
upvoted 1 times
...
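A common alternative to embedding SDK credentials in the browser, as step 2 above suggests, is to mint a presigned upload URL from something small (e.g. a Lambda behind API Gateway) and let the browser PUT the video straight to S3. A minimal boto3 sketch, with a hypothetical bucket and key:

```python
# Mint a short-lived URL the browser can use to upload directly to S3,
# so no web-tier compute handles the file bytes.
import boto3

s3 = boto3.client("s3")

upload_url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "video-uploads", "Key": "uploads/example.mp4"},
    ExpiresIn=900,  # the URL is valid for 15 minutes
)
print(upload_url)
```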
MoT0ne
5 months, 2 weeks ago
re-architect the application to reduce operational overhead
upvoted 1 times
...
grire974
7 months, 2 weeks ago
Selected Answer: C
If it were D - how would Rekognition access the videos to classify them? Rekognition would need to ssh into the EBS volumes of various Beanstalk instances running under an ASG (impossible, as far as I know). I agree, though - I think the wording "contains static content" is terrible, as how on earth would this type of app practically run on S3 alone? Login/user auth etc. would need to be coupled with other serverless products such as Lambda/Cognito.
upvoted 1 times
grire974
7 months, 2 weeks ago
Per my previous comment: S3 is the only viable data source for Rekognition. https://aws.amazon.com/rekognition/faqs/#:~:text=Amazon%20Rekognition%20Video%20operations%20can,are%20MPEG%2D4%20and%20MOV. From my experience this is the same with similar services like Elastic Transcoder.
upvoted 1 times
...
...
924641e
8 months, 2 weeks ago
Selected Answer: C
The mention of static content really throws this question off and clearly the community thinks this as well. The argument of static website vs static content being the key to selecting D isn't really a strong argument but that doesn't exclude D from being a viable solution. Operational overhead is minimized with Elastic Beanstalk and removes dependencies on third party tools/software.
upvoted 2 times
24Gel
5 months, 2 weeks ago
Thanks, this is the best explanation.
upvoted 1 times
...
...
subbupro
8 months, 3 weeks ago
Elastic Beanstalk is not required; it is static content only, so it's better to go with S3. So the answer is C.
upvoted 1 times
...
abeb
9 months ago
C videos in Amazon S3
upvoted 1 times
...
KevinYao
9 months ago
Selected Answer: D
A web application is never hosted in S3; normally that is just storage.
upvoted 1 times
...
severlight
9 months, 2 weeks ago
Selected Answer: C
C is a well-explained and detailed solution. For D it isn't like that, for instance, there is no solution provided for storing images.
upvoted 1 times
...
M4D3V1L
10 months, 4 weeks ago
It's A. I had the same question in Jon Bonso's tests and the right answer is A.
upvoted 2 times
...
alexua
11 months, 1 week ago
I go with D. "web site has static content" it's not the same be static web site. And web site on S3 does not go with https, so upload the video without Authentication & SSL/TLS !!???
upvoted 1 times
...
Simon523
11 months, 3 weeks ago
Selected Answer: C
The case is similar to the blog below; it seems Amazon Rekognition is normally triggered by an AWS Lambda function. https://aws.amazon.com/tw/blogs/architecture/detecting-solar-panel-damage-with-amazon-rekognition-custom-labels/
upvoted 1 times
...
whenthan
1 year ago
Selected Answer: C
While AWS Elastic Beanstalk can simplify deployment, it might not fully meet the requirement of removing dependencies on third-party software, as it still requires using Amazon Rekognition. This option introduces additional complexity by maintaining a separate worker environment for SQS queue processing.
upvoted 2 times
...
chico2023
1 year ago
Answer D. It says: "The website contains static content...", not "It's a static website". Still, even if you argue that it's possible to host a web application in S3 with a combination of S3 + Lambda + ..., you would fall into increasing the operational overhead with so many moving parts. AWS Elastic Beanstalk is a platform as a service used for deploying and scaling web applications and services and, although it won't make everything serverless (they are not asking for that), it will make management and deployment easier while still using AWS Managed Services. https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts-worker.html
upvoted 4 times
Arnaud92
12 months ago
Why would they specify that the web app contains static content if it were not 100% static content? It wouldn't make sense here. You have to assume that it is a static website.
upvoted 3 times
8608f25
7 months, 2 weeks ago
It can't be a static website because users are able to upload content to it. It is a dynamic website. The scenario mentions static content because that is part of the overall solution.
upvoted 1 times
...
...
...
Russs99
1 year, 1 month ago
Selected Answer: C
The main concern with option D is that it still relies on managing EC2 instances for both the web application and the worker environment, which might not be the most cost-effective and operationally efficient solution compared to the serverless architecture in option C.
upvoted 2 times
...
giancarlooooo
1 year, 1 month ago
Selected Answer: D
The answer is D because the question says "re-architect" so you don't want to intervene on the software, but only on the management. If the question said "re-factoring" then it would have been C
upvoted 3 times
chiajy
1 year ago
I support answer D but re-arc & re-fac mean the same thing. [Ref: https://aws.amazon.com/blogs/enterprise-strategy/6-strategies-for-migrating-applications-to-the-cloud/]
upvoted 1 times
...
...
Mom305
1 year, 1 month ago
Selected Answer: C
Lambda covers the serverless approach, S3 is way better than EFS, and SQS processes the events from S3.
upvoted 2 times
...
NikkyDicky
1 year, 2 months ago
Selected Answer: C
C most fitting
upvoted 1 times
...
Jonalb
1 year, 2 months ago
Selected Answer: C
Static content! guyssssss
upvoted 2 times
...
SkyZeroZx
1 year, 2 months ago
Selected Answer: C
C is the more serverless solution.
upvoted 1 times
...
easytoo
1 year, 2 months ago
c-c-c-c-c-c-c-c-c-c
upvoted 1 times
...
muurilopes
1 year, 2 months ago
Selected Answer: D
The application needs a backend to process video uploads
upvoted 1 times
...
dev112233xx
1 year, 3 months ago
Selected Answer: D
How is it possible to host this website in S3?? The website has STATIC "content", but the website itself is NOT static.
upvoted 6 times
Arnaud92
12 months ago
Why would they mention that the website has some static content? It makes no sense here.
upvoted 1 times
...
BATSIE
1 year, 2 months ago
Yes, you can host videos on Amazon S3. Amazon S3 is an object storage service that can store and retrieve any amount of data, including videos, images, and other media files. While Amazon S3 can be used to host static websites, it is not limited to just that use case. You can use Amazon S3 to store and serve any type of file, including videos. You can also use Amazon S3 in combination with other AWS services such as Amazon CloudFront to deliver video content to users with low latency and high transfer speed
upvoted 1 times
...
...
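For reference, the static website hosting feature this thread argues about is a one-call bucket configuration. A minimal boto3 sketch with a hypothetical bucket name:

```python
# Turn on S3 static website hosting for a bucket.
import boto3

boto3.client("s3").put_bucket_website(
    Bucket="intranet-site",  # hypothetical
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)
```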
nexus2020
1 year, 4 months ago
Selected Answer: C
This solution eliminates the need for managing and scaling EC2 instances for the web application and the worker environment for processing the SQS queue.
upvoted 7 times
...
mfsec
1 year, 5 months ago
Selected Answer: C
Host the web application in Amazon S3
upvoted 3 times
...
mKrishna
1 year, 5 months ago
The answer is D. Point to consider: "reduce operational overhead using AWS managed services", and not redesigning. Therefore, EC2 will be replaced with Elastic Beanstalk.
upvoted 3 times
AlbertS82
1 year, 4 months ago
No. The key point here is: The company wants to RE-ARCHITECT the application to reduce operational overhead using AWS managed services where possible and remove dependencies on third-party software. Read the question carefully.
upvoted 1 times
...
...
kiran15789
1 year, 5 months ago
Selected Answer: C
This solution eliminates the need for managing and scaling EC2 instances for the web application and the worker environment for processing the SQS queue.
upvoted 3 times
...
cudbyanc
1 year, 6 months ago
Selected Answer: C
The answer is C. This solution eliminates the need for managing and scaling EC2 instances for the web application and the worker environment for processing the SQS queue. Instead, Amazon S3 can host the web application, and store the uploaded videos, which can trigger S3 event notifications to send messages to the SQS queue. Then, an AWS Lambda function can process the messages in the SQS queue and use Amazon Rekognition API to categorize the videos. This approach also takes advantage of AWS-managed services, such as S3, SQS, and Lambda, which reduces operational overhead and dependency on third-party software.
upvoted 4 times
...
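The S3-to-SQS wiring cudbyanc describes is a bucket notification configuration. A minimal boto3 sketch; the bucket name and queue ARN are hypothetical, and the queue's access policy is assumed to already allow s3.amazonaws.com to send messages for this bucket:

```python
# Publish an SQS message for every object created in the bucket.
import boto3

boto3.client("s3").put_bucket_notification_configuration(
    Bucket="video-uploads",
    NotificationConfiguration={
        "QueueConfigurations": [{
            "QueueArn": "arn:aws:sqs:us-east-1:123456789012:video-events",
            "Events": ["s3:ObjectCreated:*"],  # fire on every new upload
        }]
    },
)
```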
PSPaul
1 year, 6 months ago
Vote C
upvoted 2 times
...
God_Is_Love
1 year, 6 months ago
Logical answer: the key here is reduced operational overhead and using AWS managed services, which points to serverless solutions - Lambda and Rekognition (AWS managed). Mounting EFS is overhead; moreover, it is meant for file systems and can pose scaling problems with large video content in the future. S3 is obviously good for static video storage, so C is correct.
upvoted 1 times
...
Musk
1 year, 7 months ago
I don't like C. It says that the site CONTAINS static content, but it does not say ONLY static content. S3 would not be suitable.
upvoted 2 times
c73bf38
1 year, 6 months ago
The most appropriate solution would be to use Amazon S3 for storing the uploaded videos, and hosting the web application. This approach reduces operational overhead, and removes dependencies on third-party software. S3 event notifications can be used to publish events to an SQS queue, which can then be processed using AWS Lambda functions that call the Amazon Rekognition API to categorize the videos.
upvoted 2 times
...
...
Untamables
1 year, 8 months ago
Selected Answer: C
I vote C. The serverless architecture reduces operational overhead the most for the requirement. https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-a-react-based-single-page-application-to-amazon-s3-and-cloudfront.html https://docs.aws.amazon.com/AmazonS3/latest/userguide/NotificationHowTo.html https://docs.aws.amazon.com/rekognition/latest/dg/video-analyzing-with-sqs.html
upvoted 4 times
...
spencer_sharp
1 year, 8 months ago
Selected Answer: C
no brainer
upvoted 3 times
...
masetromain
1 year, 8 months ago
Selected Answer: C
Website contains static content = S3. I go with C.
upvoted 5 times
masetromain
1 year, 8 months ago
https://www.examtopics.com/discussions/amazon/view/35889-exam-aws-certified-solutions-architect-professional-topic-1/
upvoted 1 times
...
...
zhangyu20000
1 year, 8 months ago
Correct answer is C
upvoted 2 times
...
Question #19 Topic 1

A company has a serverless application comprised of Amazon CloudFront, Amazon API Gateway, and AWS Lambda functions. The current deployment process of the application code is to create a new version number of the Lambda function and run an AWS CLI script to update. If the new function version has errors, another CLI script reverts by deploying the previous working version of the function. The company would like to decrease the time to deploy new versions of the application logic provided by the Lambda functions, and also reduce the time to detect and revert when errors are identified.
How can this be accomplished?

  • A. Create and deploy nested AWS CloudFormation stacks with the parent stack consisting of the AWS CloudFront distribution and API Gateway, and the child stack containing the Lambda function. For changes to Lambda, create an AWS CloudFormation change set and deploy; if errors are triggered, revert the AWS CloudFormation change set to the previous version.
  • B. Use AWS SAM and built-in AWS CodeDeploy to deploy the new Lambda version, gradually shift traffic to the new version, and use pre-traffic and post-traffic test functions to verify code. Rollback if Amazon CloudWatch alarms are triggered.
  • C. Refactor the AWS CLI scripts into a single script that deploys the new Lambda version. When deployment is completed, the script tests execute. If errors are detected, revert to the previous Lambda version.
  • D. Create and deploy an AWS CloudFormation stack that consists of a new API Gateway endpoint that references the new Lambda version. Change the CloudFront origin to the new API Gateway endpoint, monitor errors and if detected, change the AWS CloudFront origin to the previous API Gateway endpoint.

Correct Answer: B 🗳️

Community vote distribution
B (100%)

masetromain
Highly Voted 1 year, 7 months ago
Selected Answer: B
AWS Serverless Application Model (SAM) is a framework that helps you build, test and deploy your serverless applications. It uses CloudFormation under the hood, so it is a way to simplify the process of creating, updating, and deploying CloudFormation templates. CodeDeploy is a service that automates code deployments to any instance, including on-premises instances and Lambda functions. With AWS SAM you can use the built-in CodeDeploy to deploy new versions of the Lambda function, gradually shift traffic to the new version, and use pre-traffic and post-traffic test functions to verify code. You can also define CloudWatch Alarms to trigger a rollback in case of any issues. This allows for a faster and more efficient deployment process, as well as a more reliable rollback process when errors are identified. This way you can increase the speed of deployment and reduce the time to detect and revert when errors are identified.
upvoted 28 times
...
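For the curious, the gradual-deployment behavior described above is driven by a few properties on the function resource in the SAM template. A minimal sketch; the alarm and hook resources referenced here are hypothetical placeholders defined elsewhere in the same template:

```yaml
# A minimal SAM sketch of answer B: canary traffic shifting with rollback.
Resources:
  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.12
      CodeUri: src/
      AutoPublishAlias: live              # publish a version + alias per deploy
      DeploymentPreference:
        Type: Canary10Percent5Minutes     # shift 10%, wait, then the rest
        Alarms:
          - !Ref FunctionErrorsAlarm      # a breach triggers automatic rollback
        Hooks:
          PreTraffic: !Ref PreTrafficHookFunction
          PostTraffic: !Ref PostTrafficHookFunction
```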
AwsZora
Most Recent 2 months, 1 week ago
Why not A?
upvoted 2 times
...
gofavad926
5 months, 1 week ago
Selected Answer: B
B, use SAM to deploy serverless applications on AWS.
upvoted 1 times
...
atirado
8 months, 1 week ago
Selected Answer: B
Option A - This will allow reverting to previous versions of the Lambda functions, but reverting means all functions will be reverted. This does not minimize the time needed to detect and revert errors.
Option B - This option minimizes the time needed to deploy functions and to detect and revert errors: as each function is deployed, it can be tested and reverted individually. Moreover, the option provides a straightforward mechanism to detect and revert errors: detect errors in CloudWatch, fix the functions' code in SAM, redeploy with AWS CodeDeploy.
Option C - This option does not minimize the time needed to detect and revert errors. It only automates the current process.
Option D - This option does not minimize the time needed to detect and revert errors: it takes time for CloudFormation to switch origins, and nothing has been done about the current process for deploying and testing functions.
upvoted 1 times
...
shaaam80
8 months, 3 weeks ago
Selected Answer: B
Answer B. Use SAM and Codedeploy. Revert if any errors to the previous version.
upvoted 1 times
...
severlight
9 months, 2 weeks ago
Selected Answer: B
obvious
upvoted 1 times
...
whenthan
1 year ago
Selected Answer: B
Requirements: decrease the time to deploy new versions of the application logic provided by the Lambda functions, and revert when errors are identified.
upvoted 1 times
...
NikkyDicky
1 year, 2 months ago
Selected Answer: B
B, no doubt.
upvoted 1 times
...
Jonalb
1 year, 2 months ago
Selected Answer: B
100% B
upvoted 1 times
...
gameoflove
1 year, 3 months ago
Selected Answer: B
B solves the problem occurring in the current scenario.
upvoted 1 times
...
2aldous
1 year, 4 months ago
Selected Answer: B
Definitely B. https://docs.aws.amazon.com/es_es/serverless-application-model/latest/developerguide/automating-updates-to-serverless-apps.html
upvoted 1 times
...
mfsec
1 year, 5 months ago
Selected Answer: B
Use AWS SAM and built-in AWS CodeDeploy
upvoted 1 times
...
5up3rm4n
1 year, 5 months ago
Selected Answer: B
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/automating-updates-to-serverless-apps.html
AWS Serverless Application Model (AWS SAM) comes built in with CodeDeploy to provide gradual AWS Lambda deployments. With just a few lines of configuration, AWS SAM does the following for you:
- Deploys new versions of your Lambda function, and automatically creates aliases that point to the new version.
- Gradually shifts customer traffic to the new version until you're satisfied that it's working as expected. If an update doesn't work correctly, you can roll back the changes.
- Defines pre-traffic and post-traffic test functions to verify that the newly deployed code is configured correctly and that your application operates as expected.
- Automatically rolls back the deployment if CloudWatch alarms are triggered.
upvoted 2 times
...
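A pre-traffic hook from that list is itself a Lambda function that reports a verdict back to CodeDeploy. A minimal sketch, where run_smoke_tests is a hypothetical placeholder for whatever validation suits the application:

```python
# A minimal pre-traffic hook: run checks, then tell CodeDeploy the outcome.
import boto3

codedeploy = boto3.client("codedeploy")

def run_smoke_tests():
    # e.g. invoke the newly published function version and check its output
    pass

def handler(event, context):
    try:
        run_smoke_tests()
        status = "Succeeded"
    except Exception:
        status = "Failed"  # CodeDeploy stops the traffic shift and rolls back
    # Report the verdict back so CodeDeploy can proceed (or roll back).
    codedeploy.put_lifecycle_event_hook_execution_status(
        deploymentId=event["DeploymentId"],
        lifecycleEventHookExecutionId=event["LifecycleEventHookExecutionId"],
        status=status,
    )
```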
kiran15789
1 year, 5 months ago
Selected Answer: B
AWS Serverless Application Model (SAM)
upvoted 1 times
...
spencer_sharp
1 year, 8 months ago
Selected Answer: B
sam typical use case
upvoted 3 times
...
masetromain
1 year, 8 months ago
Selected Answer: B
AWS CodeDeploy is intended for this kind of use https://aws.amazon.com/fr/codedeploy/
upvoted 2 times
masetromain
1 year, 8 months ago
https://www.examtopics.com/discussions/amazon/view/5158-exam-aws-certified-solutions-architect-professional-topic-1/
upvoted 1 times
...
...
Question #20 Topic 1

A company is planning to store a large number of archived documents and make the documents available to employees through the corporate intranet. Employees will access the system by connecting through a client VPN service that is attached to a VPC. The data must not be accessible to the public.
The documents that the company is storing are copies of data that is held on physical media elsewhere. The number of requests will be low. Availability and speed of retrieval are not concerns of the company.
Which solution will meet these requirements at the LOWEST cost?

  • A. Create an Amazon S3 bucket. Configure the S3 bucket to use the S3 One Zone-Infrequent Access (S3 One Zone-IA) storage class as default. Configure the S3 bucket for website hosting. Create an S3 interface endpoint. Configure the S3 bucket to allow access only through that endpoint.
  • B. Launch an Amazon EC2 instance that runs a web server. Attach an Amazon Elastic File System (Amazon EFS) file system to store the archived data in the EFS One Zone-Infrequent Access (EFS One Zone-IA) storage class Configure the instance security groups to allow access only from private networks.
  • C. Launch an Amazon EC2 instance that runs a web server Attach an Amazon Elastic Block Store (Amazon EBS) volume to store the archived data. Use the Cold HDD (sc1) volume type. Configure the instance security groups to allow access only from private networks.
  • D. Create an Amazon S3 bucket. Configure the S3 bucket to use the S3 Glacier Deep Archive storage class as default. Configure the S3 bucket for website hosting. Create an S3 interface endpoint. Configure the S3 bucket to allow access only through that endpoint.

Correct Answer: D 🗳️

Community vote distribution
A (67%)
D (32%)
1%

tman22
Highly Voted 1 year, 8 months ago
A - Glacier Deep Archive can't be used for web hosting, regardless if the company says retrieval time is no concern.
upvoted 36 times
tman22
1 year, 8 months ago
Never mind, I go for D. It should be technically possible - and mostly dependent on the intranet web application logic - it could present users with the ability to start file retrieval and then later access the data.
upvoted 16 times
...
...
zhangyu20000
Highly Voted 1 year, 8 months ago
A is correct. HA is not required here. D uses Glacier Deep Archive, which needs hours to access and would cause timeouts for the web.
upvoted 21 times
...
MAZIADI
Most Recent 2 weeks, 1 day ago
Selected Answer: A
A because: there is no static web hosting on Glacier Deep Archive; Glacier is the cheapest to store but can be more expensive than One Zone-IA if the employees retrieve the documents (retrieval costs are high); and the front end would time out because it takes hours to retrieve a file.
upvoted 1 times
...
5ehjry6sktukliyliuliykutjhy
2 months, 1 week ago
I went with D, and so did ChatGPT, yet the majority of folks have chosen A... How do we know the exact answer? I see why One Zone-IA should be used, but I am not confident. Please help.
upvoted 2 times
8693a49
3 weeks, 6 days ago
You cannot have a website on Glacier, so D is clearly wrong. To retrieve documents from Glacier you need to first call 'restore' on them. The object becomes available, after a considerable amount of time, in a standard storage class for a limited duration. This wouldn't work on a static website. I suppose technically you could build a non-static application to manage restoring files for users, but it's awkward, and the solution would likely cost more due to development costs. Glacier's purpose is to store data that you never want to see again but that there is a 0.001% chance you might actually need at some point. It is comparable to tape storage. ChatGPT cannot answer these questions accurately because it is unable to reason.
upvoted 3 times
...
...
cnethers
2 months, 1 week ago
The cheapest storage tier in Amazon S3 that can be used for static web hosting is the Amazon S3 Standard - Infrequent Access (S3 Standard-IA). While it offers lower costs compared to the S3 Standard storage class, it is designed for data that is accessed less frequently but still requires rapid access when needed.
upvoted 1 times
cnethers
2 months, 1 week ago
Here's a quick comparison of the relevant S3 storage classes. Amazon S3 Standard:
- Designed for frequently accessed data.
- Low latency and high throughput performance.
- Suitable for websites with dynamic content and frequent access.
upvoted 1 times
...
cnethers
2 months, 1 week ago
Amazon S3 Standard-IA (Infrequent Access):
- Lower storage cost than S3 Standard.
- Suitable for infrequently accessed data that still requires rapid access when needed.
- There is a retrieval fee per GB when accessing the data.
upvoted 1 times
...
cnethers
2 months, 1 week ago
Amazon S3 One Zone-IA:
- Even lower cost than S3 Standard-IA.
- Stores data in a single Availability Zone, which makes it less resilient than S3 Standard-IA.
- Suitable for infrequently accessed data that does not require high availability.
upvoted 1 times
...
cnethers
2 months, 1 week ago
Amazon S3 Glacier and S3 Glacier Deep Archive:
- Much cheaper storage options.
- Designed for long-term archival and infrequently accessed data.
- Retrieval times range from minutes (S3 Glacier) to hours (S3 Glacier Deep Archive).
- Not suitable for static web hosting due to the high latency of data retrieval.
upvoted 1 times
...
...
Helpnosense
2 months, 1 week ago
Selected Answer: A
The answer is A. D is wrong because data stored in Glacier Deep Archive can't be accessed directly without first initiating a retrieval request to restore the data to S3 Standard or S3 Standard-IA - let alone used for a static website.
upvoted 1 times
...
gfhbox0083
2 months, 2 weeks ago
A for sure. S3 One Zone-IA is ideal for customers who want a lower-cost option for infrequently accessed data but do not require the availability and resilience of S3.
upvoted 1 times
...
lighthouse85
2 months, 3 weeks ago
Selected Answer: A
You cannot use Glacier for hosting.
upvoted 1 times
...
titi_r
3 months, 3 weeks ago
How can “A” or “D” be correct even though interface endpoint (PrivateLink) for S3 does NOT support Website endpoints!? https://docs.aws.amazon.com/AmazonS3/latest/userguide/privatelink-interface-endpoints.html#privatelink-limitations
upvoted 1 times
...
TonytheTiger
4 months, 3 weeks ago
Selected Answer: D
Option D. Two major points for the company: 1. Availability and speed of retrieval are NOT concerns of the company. 2. Meet these requirements at the LOWEST cost. Only S3 Glacier Deep Archive gives the company those requirements. The question doesn't state how fast the employees need to access the files, but the company does - see point 1. S3 Glacier Deep Archive is the lowest-cost storage option in AWS. Standard-IA and S3 One Zone-IA objects are available for millisecond access. https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html
upvoted 1 times
...
Smart
4 months, 4 weeks ago
Selected Answer: A
Objects in Glacier Deep Archive need to be 'restored'. A click on a simple static website will not make an AWS API call to restore the object and make it available.
upvoted 2 times
...
red_panda
5 months, 1 week ago
Selected Answer: A
For me it's A. I'm not sure that S3 Glacier Deep Archive can be used as a website. Also, more than 12 hours to retrieve is a lot for a document system (even if retrieval speed is not a concern). Going with A.
upvoted 1 times
...
MoT0ne
5 months, 2 weeks ago
Selected Answer: D
LOWEST cost when compared with A
upvoted 1 times
...
a54b16f
6 months, 2 weeks ago
Selected Answer: A
Pay attention to "copies of data that is held on physical media elsewhere"; this is a hint for One Zone. Using Glacier is possible in theory, but it won't work out of the box. You would need to develop a whole new application to submit an unarchive request when a user requests a file, wait for up to 48 hours, create the S3 link, notify the user, and ask the user to come back to view the file. That is ANOTHER application.
upvoted 7 times
24Gel
5 months, 2 weeks ago
I agree, "copies of data that is held on physical media elsewhere", this is hint for one zone, However, it could be multiple zone as well. Availability and speed of retrieval are not concerns of the company. So I go with D
upvoted 1 times
...
kz407
5 months, 1 week ago
This! It's also worth mentioning that the application we would have to develop for option D will be very difficult, if not impossible, to host in S3, because it will be a stateful application.
upvoted 1 times
...
...
gustori99
6 months, 3 weeks ago
B and C do not make sense, BUT A and D also contain nonsense information. It is not possible to configure a bucket to use the S3 One Zone-IA storage class or a Glacier storage class as default: the Standard storage class is always the default and cannot be changed. You can only specify a different storage class during upload. A lifecycle policy cannot help either, because it allows transition to S3 One Zone-IA only after 30 days. Configuring the bucket for website hosting does not make sense either, because a website endpoint is only accessible from the public internet (if the bucket policy allows it) and it is not supported over an interface endpoint.
upvoted 2 times
...
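To gustori99's point that the storage class is chosen per object at upload time rather than as a bucket default, here is a minimal boto3 sketch; the bucket, key, and file name are hypothetical:

```python
# Upload a document directly into the One Zone-IA storage class.
import boto3

with open("doc-0001.pdf", "rb") as f:
    boto3.client("s3").put_object(
        Bucket="corp-archive-docs",
        Key="archive/doc-0001.pdf",
        Body=f,
        StorageClass="ONEZONE_IA",  # chosen per object, at upload time
    )
```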
liux99
7 months, 3 weeks ago
The confusion here is between A and D. D is cheaper but is not viable: you cannot use an S3 bucket in the Glacier Deep Archive class for web hosting.
upvoted 2 times
24Gel
5 months, 2 weeks ago
This should not be a concern here. You cannot create a Deep Archive bucket; when you create a bucket, you either create a normal bucket or a single-zone bucket, and then you can configure it to use Deep Archive within it.
upvoted 2 times
...
...
Jay_2pt0_1
8 months ago
I think I'll go for A when I take the exam, but, like most people, I'm on the fence.
upvoted 2 times
...
atirado
8 months, 1 week ago
Selected Answer: A
Option A - This option will work, and S3 One Zone is a cheap storage solution for a large number of documents.
Option B - This option might not work: nothing is said in the question about whether the Client VPN connects to a private subnet. Moreover, EFS might not be a cheap storage solution for a large number of documents.
Option C - This option might not work: nothing is said in the question about whether the Client VPN connects to a private subnet. Moreover, EBS Cold HDD might not be a cheap storage solution for a large number of documents.
Option D - This option will not work: S3 Glacier Deep Archive vaults cannot be configured for static hosting. You would need to write an application for accessing the archives.
upvoted 2 times
...
ninomfr64
8 months, 1 week ago
Selected Answer: D
In the exam I would go for D, but both A and D have an issue: you do not need to enable static website hosting on the bucket, as that is only for public website endpoints. However, having static website hosting enabled doesn't prevent you from accessing the bucket using the API. See https://aws.amazon.com/blogs/networking-and-content-delivery/hosting-internal-https-static-websites-with-alb-s3-and-privatelink/#:~:text=You%20do%20not%20need%20to%20enable%20static%20website%20hosting%20on%20the%20bucket%2C%20as%20this%20is%20only%20for%20public%20website%20endpoints.%20Requests%20to%20the%20bucket%20will%20be%20going%20through%20a%20private%20REST%20API%20instead.
upvoted 2 times
...
924641e
8 months, 2 weeks ago
Tricky but answer D would provide the LOWEST cost vs answer A. Answer A would be the best design balance between cost and use for end-users.
upvoted 1 times
...
ixdb
8 months, 2 weeks ago
Selected Answer: D
An S3 bucket does not support setting a default storage class. You can create a lifecycle rule with Day 0 to move objects to the Glacier Deep Archive class and enable web hosting. You can do it in the AWS console.
upvoted 1 times
...
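The Day-0 lifecycle rule ixdb mentions looks roughly like this in boto3 (bucket name hypothetical); newly created objects transition to Deep Archive shortly after upload:

```python
# Transition every object to DEEP_ARCHIVE at day 0.
import boto3

boto3.client("s3").put_bucket_lifecycle_configuration(
    Bucket="corp-archive-docs",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "to-deep-archive",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object in the bucket
            "Transitions": [{"Days": 0, "StorageClass": "DEEP_ARCHIVE"}],
        }]
    },
)
```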
[Removed]
8 months, 3 weeks ago
Selected Answer: A
D is out because "To retrieve data stored in S3 Glacier Deep Archive, initiate a “Restore” request using the Amazon S3 APIs or the Amazon S3 Management Console."
upvoted 2 times
...
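The restore request referenced above is issued per object. A minimal boto3 sketch with hypothetical names; for Deep Archive, the Standard retrieval tier completes within about 12 hours and Bulk within about 48:

```python
# Ask S3 to stage a temporary, readable copy of an archived object.
import boto3

s3 = boto3.client("s3")
s3.restore_object(
    Bucket="corp-archive-docs",
    Key="archive/doc-0001.pdf",
    RestoreRequest={
        "Days": 7,  # how long the temporary restored copy stays available
        "GlacierJobParameters": {"Tier": "Bulk"},  # ~48 h for Deep Archive
    },
)
# Poll head_object until the Restore header reports ongoing-request="false".
```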
shaaam80
8 months, 3 weeks ago
Selected Answer: A
Answer A. B and C are not relevant. D is close enough to create confusion but can't be chosen, for 2 reasons: 1. You can't create an S3 bucket with Glacier Deep Archive as the default storage class; you need a lifecycle transition from another S3 class. 2. S3 Glacier Deep Archive can't be used for website hosting.
upvoted 1 times
_Jassybanga_
6 months, 3 weeks ago
1. You can create it - read here: https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html 2. Yes, you are correct on this - so I will go with answer A too.
upvoted 1 times
...
...
geekos
8 months, 4 weeks ago
Selected Answer: D
If the company needs to access these documents occasionally and can tolerate several hours of retrieval time, Option D (Glacier Deep Archive) would be the most cost-effective solution. However, if the company requires faster access to these documents (even if infrequently), Option A (S3 One Zone-IA) would be a better choice, balancing cost with the need for more immediate access. Since the original statement indicates that "availability and speed of retrieval are not concerns," Option D (Glacier Deep Archive) aligns more closely with these stipulations, offering the lowest cost solution at the expense of longer retrieval times. However, if the retrieval time becomes a concern at any point, switching to S3 One Zone-IA (Option A) could provide a middle ground between cost and accessibility.
upvoted 1 times
...
Hit1979
8 months, 4 weeks ago
Selected Answer: A
S3 Glacier Deep Archive storage is primarily intended for data archiving purposes. However, it's important to note that in many organizations, only backup administrators have access to retrieve data, and it's not typically designed for direct user access. Additionally, using Deep Archive for web hosting is not feasible due to its intended use case, which focuses on long-term data retention rather than immediate or user-initiated access
upvoted 1 times
...
edder
9 months ago
Selected Answer: D
The answer is D.
B, C: No need to use EC2.
A: There is no need to use IA, as it states that availability and speed of retrieval are not concerns of the company.
D: If necessary, you can restore the object to an S3 bucket and retrieve it with web hosting. https://docs.aws.amazon.com/AmazonS3/latest/userguide/restoring-objects.html
upvoted 1 times
...
jainparag1
9 months ago
Selected Answer: A
B and C are distractors. Between A and D, A satisfies all the requirements; since the data is available physically somewhere else, it can be stored in the One Zone-IA storage class. D would have been the best option cost-wise if the bucket were not used for website hosting. Since the Glacier class can't be used for website hosting, that eliminates option D. The right answer is A.
upvoted 1 times
...
epartida
9 months, 1 week ago
Selected Answer: D
D, because the request volume is low; that means the files will not be checked frequently.
upvoted 1 times
...
srs27
9 months, 2 weeks ago
With such a divide on the correct answer, what should be picked and remembered for the exam?
upvoted 1 times
...
severlight
9 months, 2 weeks ago
Selected Answer: A
I would answer A instead of D, for the reason that copies are already stored somewhere else, so we don't need durability across AZs, and there is a retrieval cost for the archive. However, "the speed of retrieval is not a concern" makes me doubt this.
upvoted 1 times
...
senthilsekaran
9 months, 4 weeks ago
I will go with option D, as it is clearly mentioned that the company wants to store archived documents.
upvoted 3 times
...
Pupu86
9 months, 4 weeks ago
D, as Glacier Deep Archive is the lowest cost; A, One Zone-IA, only costs about 20% less than Standard-IA. Both can work for data retrieval without concern for speed, but D will be the cheapest no matter what.
upvoted 1 times
...
rlf
10 months, 3 weeks ago
A. If it were D, the retrieval cost from the web would become higher than A. Also, in Glacier we call it a vault, not a bucket.
upvoted 2 times
jainparag1
9 months ago
Retrieval is very rare and time is not a concern. I'll go for D. One zone IA is not for storing archived documents.
upvoted 1 times
...
...
career360guru
11 months, 3 weeks ago
Option A is the only cost effective solution. Deep Archive can't be used for Web-Hosting. Anyone who thinks that is possible should try it once before selecting that option.
upvoted 3 times
ninomfr64
8 months, 1 week ago
You can create an S3 bucket and configure a lifecycle policy to move any file after 0 days to Glacier Deep Archive, then enable static website hosting (I just did it). The point here is that static website hosting is only for the public endpoint; thus, in this scenario, going for A or D you would only access documents using the S3 API.
upvoted 2 times
...
...
bur4an
12 months ago
Selected Answer: A
Given the requirements and the need for the lowest cost solution, the best option would be: A. Create an Amazon S3 bucket. Configure the S3 bucket to use the S3 One Zone-Infrequent Access (S3 One Zone-IA) storage class as default. Configure the S3 bucket for website hosting. Create an S3 interface endpoint. Configure the S3 bucket to allow access only through that endpoint. Options B and C involve launching EC2 instances which would add unnecessary complexity and cost since the company's priority is to minimize costs. Additionally, option D involves using the S3 Glacier Deep Archive storage class which is intended for long-term archival data and has longer retrieval times, making it less suitable for the given requirements.
upvoted 3 times
...
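The "allow access only through that endpoint" step in options A and D is typically a bucket policy keyed on aws:sourceVpce. A minimal boto3 sketch; the bucket name and endpoint ID are hypothetical:

```python
# Deny any request to the bucket that did not arrive through the interface
# endpoint, keeping the data off the public internet.
import json

import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAllExceptVpcEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::corp-archive-docs",
            "arn:aws:s3:::corp-archive-docs/*",
        ],
        "Condition": {
            "StringNotEquals": {"aws:sourceVpce": "vpce-0123456789abcdef0"}
        },
    }],
}
boto3.client("s3").put_bucket_policy(
    Bucket="corp-archive-docs", Policy=json.dumps(policy)
)
```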
dkcloudguru
1 year ago
Option D: This option involves creating an Amazon S3 bucket and configuring it to use the S3 Glacier Deep Archive storage class as default. This storage class is designed for long-term storage of data that is rarely accessed and can be restored within several hours, offering the lowest cost storage for different access patterns. The S3 bucket is configured for website hosting and an S3 interface endpoint is created
upvoted 1 times
...
whenthan
1 year ago
Selected Answer: A
Large document storage = S3; availability and speed of retrieval are not concerns, and lowest cost...
upvoted 2 times
...
Simon523
1 year ago
Selected Answer: D
I think the key words are "availability and speed of retrieval are not concerns" and "LOWEST cost". Of course users cannot directly access the files, since it requires 12 hours to retrieve them, but because time is not a concern, I select "D".
upvoted 2 times
...
b3llman
1 year ago
A - Glacier Deep Archive would take too long and would hit the request timeout limit for S3.
upvoted 1 times
...
chico2023
1 year ago
Selected Answer: A
This is so tricky, but I would also go with A. The only reason I go with A is that answer D has "Configure the S3 bucket for website hosting". This part doesn't make sense (unless they were storing other types of static content) as objects archived in Glacier DA have to be restored first. Seriously. If it wasn't that, I would go D.
upvoted 1 times
...
MRL110
1 year, 1 month ago
Since there is cost associated with One Zone-IA retrieval as well as interface endpoints, this should be B considering EFS One Zone-IA is cheaper than EBS SC1.
upvoted 2 times
MRL110
1 year, 1 month ago
Also, website access is not possible with interface-endpoints. (https://repost.aws/questions/QUu19UpXsTRnaPcg5biU54RA/s3-interface-endpoint)
upvoted 1 times
ninomfr64
8 months, 1 week ago
Correct, but enabling a static website doesn't prevent you from accessing objects via the S3 API (the scenario doesn't require users to access content via a website). Also, B requires running an EC2 instance, which makes B and C more expensive than A and D.
upvoted 1 times
...
...
Greyeye
1 year ago
us-east-1 pricing:
EFS One Zone-Infrequent Access storage: $0.0133 per GB-month
EBS sc1: $0.015 per GB-month of provisioned storage
That pricing is very close...
upvoted 1 times
...
...
Russs99
1 year, 1 month ago
Selected Answer: A
As to D, S3 Glacier Deep Archive storage should not be used as the default storage for any daily usage. It is designed for long-term archiving of data that is rarely accessed. The default retrieval time for S3 Glacier Deep Archive items is 12 hours, which is too slow for most daily usage.
upvoted 1 times
...
khksoma
1 year, 1 month ago
It is A. https://tutorialsdojo.com/amazon-s3-vs-glacier/
upvoted 1 times
...
Magoose
1 year, 1 month ago
Selected Answer: D
I don't see anything saying you can't use web hosting with Deep Archive. I believe the web hosting is separate from the storage class.
upvoted 1 times
hirenshah005
1 year, 1 month ago
You are wrong, sir. Deep Archive cannot make objects accessible, even when they are on the intranet.
upvoted 1 times
...
...
Mom305
1 year, 1 month ago
Selected Answer: D
About enabling website hosting on an S3 bucket: remember that restoring objects from Glacier Deep Archive will temporarily make the objects available in the Standard storage class (and you can enable website hosting on the bucket). https://docs.aws.amazon.com/AmazonS3/latest/userguide/restoring-objects.html
upvoted 2 times
...
NikkyDicky
1 year, 2 months ago
Selected Answer: A
A. D is not usable.
upvoted 1 times
...
dkx
1 year, 2 months ago
S3 can be used to host static web content, while Glacier cannot. In S3, users create buckets. In Glacier, users create archives and vaults. https://tutorialsdojo.com/amazon-s3-vs-glacier/
upvoted 2 times
...
Jonalb
1 year, 2 months ago
Selected Answer: D
DDDDDDDDDDDDDD sorry guys
upvoted 1 times
...
Jonalb
1 year, 2 months ago
Selected Answer: A
The number of requests will be low!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! AAAAAAAAAAAAAAAAAAAAAAA
upvoted 1 times
...
[Removed]
1 year, 2 months ago
Selected Answer: A
D is ruled out by the need for no public access, so even though A is more expensive, it's the lowest cost suitable solution.
upvoted 1 times
...
Maria2023
1 year, 2 months ago
Selected Answer: A
I vote for A, mostly based on the sentence "Create an Amazon S3 bucket. Configure the S3 bucket to use the S3 Glacier Deep Archive storage class as default" - I do not believe you can configure an S3 bucket to use the Glacier Deep Archive storage class as default. You need to set up lifecycle rules to transfer data to Glacier. Plus the "website hosting" part.
upvoted 5 times
...
Jackhemo
1 year, 2 months ago
Selected Answer: A
Based on olabiba.ai: based on the requirements and the need for the lowest cost solution, the most suitable option would be A. Create an Amazon S3 bucket. Configure the S3 bucket to use the S3 One Zone-Infrequent Access (S3 One Zone-IA) storage class as default. Configure the S3 bucket for website hosting. Create an S3 interface endpoint. Configure the S3 bucket to allow access only through that endpoint. This option allows you to store the archived documents in an S3 bucket with the One Zone-Infrequent Access storage class, which is cost-effective for long-term storage. By configuring the S3 bucket for website hosting, you can make the documents accessible through the corporate intranet. Creating an S3 interface endpoint ensures secure access through the VPC, and by configuring the S3 bucket to allow access only through that endpoint, you ensure that the data is not accessible to the public.
upvoted 1 times
...
tromyunpak
1 year, 2 months ago
S3 Glacier Deep Archive is a cost-effective and easy-to-manage alternative to tape. S3 Glacier Deep Archive delivers the lowest cost storage, up to 75% lower cost (than S3 Glacier Flexible Retrieval), for long-lived archive data that is accessed less than ONCE per year and is retrieved asynchronously. So the answer is A.
upvoted 2 times
...
easytoo
1 year, 2 months ago
a-a-a-a-a-a-a-a-a-a-a-a-a
upvoted 2 times
...
Jesuisleon
1 year, 2 months ago
Selected Answer: A
I think A makes more sense than D. I don't think the S3 Glacier Deep Archive tier is suitable for web hosting.
upvoted 1 times
...
emiliocb4
1 year, 3 months ago
Selected Answer: D
Lower cost, and no concern about retrieval time.
upvoted 1 times
...
rtguru
1 year, 3 months ago
The correct answer is D
upvoted 1 times
...
SkyZeroZx
1 year, 3 months ago
Selected Answer: D
D LOWEST Cost for me
upvoted 1 times
...
gonzjo52
1 year, 3 months ago
A website was never mentioned; it said "intranet" through the VPN, so a static website in the bucket is not necessary. I choose option D.
upvoted 1 times
...
karma4moksha
1 year, 3 months ago
Option D: Create an Amazon S3 bucket. Configure the S3 bucket to use the S3 Glacier Deep Archive storage class as default. Configure the S3 bucket for website hosting. Create an S3 interface endpoint. Configure the S3 bucket to allow access only through that endpoint. This option would be less expensive than using other S3 storage classes, but it would still be more expensive than using S3 One Zone-IA storage. Additionally, using the S3 Glacier Deep Archive storage class would make retrieval of the documents slow and expensive, which does not meet the company's requirements. Therefore, this option is not the best fit for the company's requirements. Hence, A.
upvoted 2 times
...
AWS_Sam
1 year, 3 months ago
No doubt that the answer is D, Glacier Deep Archive storage. Frontend access is not the main point of the question.
upvoted 2 times
...
gameoflove
1 year, 3 months ago
Selected Answer: D
D is the correct option, as A uses the One Zone-Infrequent Access (S3 One Zone-IA) storage class, which is not HA.
upvoted 1 times
...
Maja1
1 year, 4 months ago
Selected Answer: A
Glacier can't handle web hosting. It's a trick question.
upvoted 5 times
...
devopsy
1 year, 4 months ago
if the question says archiving, most of the time the answer is glacier
upvoted 1 times
...
OnePunchExam
1 year, 4 months ago
Selected Answer: D
When tackling AWS questions, always note the key requirement, which here is the LOWEST cost. It even says the number of requests is low, and availability and speed are not of concern. Also, don't make assumptions about the retrieval method: it can always be a frontend that triggers the restore and then, once the restore is complete, notifies the user to download. The frontend is not the main point of the question; here we want a storage-archival solution at the CHEAPEST cost.
upvoted 7 times
...
Asagumo
1 year, 4 months ago
Selected Answer: D
Within a single bucket, the objects for website hosting do not necessarily have to be in the Glacier Deep Archive storage class. Since the purpose is to store archived documents, you can assume long-term storage.
upvoted 1 times
...
hobokabobo
1 year, 4 months ago
Selected Answer: A
While D is most probably the cheapest solution, when you try to download an object in Deep Archive you get a warning that it is not possible. You need to retrieve it first: go to Actions and choose Restore, which takes at least 12 hours for *Deep* Archive. Only after that can you access the document. Answer D says to enable web hosting; AFAIK that is not going to work, and users will just end up at the above-mentioned warning. Therefore we need to go for A, which is not as cheap, but users can access the documents.
upvoted 4 times
...
dev112233xx
1 year, 5 months ago
Selected Answer: A
A makes more sense than D. Deep Archive retrieval time is 12 hours, and I'm not sure it's possible to host a static website with such a long retrieval time!
upvoted 3 times
...
vherman
1 year, 5 months ago
Selected Answer: A
A is the only correct answer. I looked up the AWS docs... S3 Glacier Deep Archive is a completely separate service that does not support web hosting.
upvoted 5 times
...
Dimidrol
1 year, 5 months ago
A. I created a bucket with web hosting and put some HTML pages in Glacier Deep Archive, and got a 403 error: operation invalid for the object's storage class.
upvoted 8 times
...
Damijo
1 year, 5 months ago
D - S3 One Zone-IA is for data that is accessed less frequently but requires rapid access when needed. The question says availability and speed of retrieval are not concerns of the company.
upvoted 1 times
...
vherman
1 year, 5 months ago
Selected Answer: D
Availability and speed of retrieval are not concerns of the company, but they did not mention high durability, which is not provided by One Zone-IA.
upvoted 1 times
vherman
1 year, 5 months ago
A is the only correct answer. I looked up the AWS docs... S3 Glacier Deep Archive is a completely separate service that does not support web hosting.
upvoted 1 times
...
...
limjieson
1 year, 5 months ago
D is correct.
upvoted 1 times
...
zejou1
1 year, 5 months ago
Selected Answer: A
https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html Store a large number of archived docs, available through the corp intranet. Copies of the data are held on physical media elsewhere (so it could be re-created). Requests are low (but it doesn't say RARE, so think monthly/quarterly). "AVAILABILITY" and speed of retrieval are not concerns. It is A. Yes, Glacier is "cheaper", but I would have to leave the archives there for at least 180 days. With A the data stays available on the corp intranet, and it is more cost-effective to migrate the data to Glacier later if I monitor usage, see it is "rarely" touched, and know I have to hold it for regulatory reasons for a minimum of 180 days.
upvoted 4 times
...
kiran15789
1 year, 5 months ago
Selected Answer: A
Will go with A considering the following hints: 1) the data is a copy of something stored elsewhere (hints at One Zone), 2) traffic is low (but it still exists), 3) minimum storage duration. D might also be correct, but I would select A in the exam.
upvoted 5 times
...
cudbyanc
1 year, 6 months ago
Selected Answer: D
This solution provides cost-effective storage for the archived documents using the S3 One Zone-Infrequent Access (S3 One Zone-IA) storage class, which is the lowest cost storage option for infrequently accessed data in a single availability zone. Hosting the S3 bucket as a website enables easy access to the documents through the intranet, and creating an S3 interface endpoint ensures that access is only possible through the VPN attached to the VPC. Additionally, S3 provides built-in security features, such as bucket policies and access control lists (ACLs), to control access to the data.
upvoted 2 times
...
Sarutobi
1 year, 6 months ago
Selected Answer: A
I will use A, but the question does not specify how often the files are retrieved. If they are retrieved frequently, A for sure; if they aren't, then D.
upvoted 4 times
Ajani
1 year, 5 months ago
https://www.linkedin.com/pulse/s3-standard-more-cost-effective-than-glacier-jon-bonso; Definitely A: Glacier has the highest minimum storage duration, which is 180 days, and it becomes cost-prohibitive once you factor in retrieval costs.
upvoted 3 times
Sarutobi
1 year, 4 months ago
Exactly, lol.
upvoted 1 times
...
...
...
God_Is_Love
1 year, 6 months ago
Tricky one - the Glacier storage class has different tiers, some of which can fetch documents quickly (instant retrieval). So many people go for A, but the answer is D to save more! - https://aws.amazon.com/s3/storage-classes/glacier/
upvoted 1 times
c73bf38
1 year, 6 months ago
I'm on the fence on this question. Option A offers a single-AZ, infrequent-access S3 storage class in a bucket that can have the web hosting feature enabled. I can't find a web hosting feature for any of the archive classes unless the archive is restored and transitioned back to the Standard class.
upvoted 2 times
...
...
PSPaul
1 year, 6 months ago
D is good! The keyword is "speed of retrieval are not concerns". So, Glacier Deep Archive is the choice.
upvoted 1 times
...
saurabh1805
1 year, 6 months ago
Selected Answer: D
Lowest cost gives the hint. It should be option D.
upvoted 1 times
...
kiran15789
1 year, 6 months ago
Selected Answer: D
The employees can connect via the intranet; the point to note is that it's not via a web application, so people can wait 12 hours to get the documents in exchange for the lowest storage cost.
upvoted 1 times
...
c73bf38
1 year, 6 months ago
Confused by why everyone thinks it's D; reading the doc, it describes the use case and the minimum archive periods. https://docs.aws.amazon.com/AmazonS3/latest/userguide/restoring-objects.html Number of days you plan to keep objects archived – S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive are long-term archival solutions. The minimal storage duration period is 90 days for the S3 Glacier Flexible Retrieval storage class and 180 days for S3 Glacier Deep Archive. Deleting data that is archived to Amazon S3 Glacier doesn't incur charges if the objects you delete are archived for more than the minimal storage duration period. If you delete or overwrite an archived object within the minimal duration period, Amazon S3 charges a prorated early deletion fee. For information about the early deletion fee, see the
upvoted 1 times
...
c73bf38
1 year, 6 months ago
Selected Answer: A
The requirements are to store a large number of archived documents that are not publicly accessible, and make them available to employees through a corporate intranet. As the number of requests is low and speed of retrieval is not a concern, we can use the low-cost S3 One Zone-Infrequent Access (S3 One Zone-IA) storage class. We can configure the S3 bucket for website hosting and create an S3 interface endpoint to allow access to the documents only through the corporate intranet. This solution is the lowest cost, as it eliminates the need to launch and manage EC2 instances. Options B and C involve launching an EC2 instance, which increases operational overhead and is more expensive than using S3; also, the EFS One Zone-IA storage class is not recommended for storing large files. Option D involves the S3 Glacier Deep Archive storage class, which is intended for long-term archival storage of data and is not suitable for retrieving data frequently. (See the sketch after this thread.)
upvoted 4 times
MRL110
1 year ago
S3 interface endpoint doesn't support web hosting. The question does not say large files, but large number of archived documents, which could be small-sized. Hence EFS OZ-IA (being cheaper than SC1) could be the right answer.
upvoted 1 times
...
...
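For contrast with Deep Archive, here is a minimal boto3 sketch of option A's mechanics, assuming a hypothetical bucket name: objects can be uploaded straight into One Zone-IA and served via S3 static website hosting with no restore step.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-intranet-archive"  # hypothetical bucket name

# Upload a document directly into the One Zone-IA storage class.
s3.put_object(
    Bucket=BUCKET,
    Key="docs/report-2023.html",
    Body=b"<html>archived report</html>",
    StorageClass="ONEZONE_IA",
)

# Enable static website hosting on the bucket.
s3.put_bucket_website(
    Bucket=BUCKET,
    WebsiteConfiguration={"IndexDocument": {"Suffix": "index.html"}},
)
```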
brfc
1 year, 6 months ago
I was going for D but due to retrieval costs I'm now leaning towards A
upvoted 3 times
...
Sara_swa
1 year, 6 months ago
B - since "the data must not be accessible to the public" and EFS One Zone-IA is cheaper than EBS sc1.
upvoted 1 times
...
oatif
1 year, 6 months ago
Selected Answer: A
D does not make any sense; a 12-48 hour data retrieval time is absurd. Hence A.
upvoted 1 times
...
DWsk
1 year, 6 months ago
I'm really unsure on this one. I can't find a definitive answer on whether you can use Glacier to host web content. It would make sense that you can't, because you need to restore the data before retrieving it, but theoretically it could be possible with application logic. This question feels like a gotcha that uses Glacier as a red herring for the CHEAPEST option but really wants you to use One Zone-IA.
upvoted 1 times
cloudman
1 year, 6 months ago
Read this: "The number of requests will be low. Availability and speed of retrieval are not concerns of the company" & LOWEST COST. I see it's D.
upvoted 2 times
...
anita_student
1 year, 6 months ago
You can't use Glacier (in any flavor) for website hosting
upvoted 2 times
...
...
Shahul75
1 year, 6 months ago
Selected Answer: C
It should be C. Ruling out the wrong ones: * S3 interface endpoints don't support website endpoints * EFS One Zone-IA is more expensive than SC1. Only one is left, which is "C".
upvoted 1 times
...
bititan
1 year, 7 months ago
Selected Answer: A
Web hosting is not possible with Deep Archive objects, so I go for option A. The question is not about an archival solution; it's about accessing data from a VPC-based application whilst maintaining the lowest cost.
upvoted 4 times
...
masetromain
1 year, 7 months ago
Selected Answer: D
The S3 Glacier Deep Archive storage class is the lowest-cost storage class offered by Amazon S3. It is designed for archival data that is accessed infrequently and for which a retrieval time of several hours is acceptable. An S3 interface endpoint for the VPC ensures that access to the bucket comes only from resources within the VPC, which meets the requirement of not being accessible to the public. Also, the S3 bucket can be configured for website hosting, which allows employees to access the documents through the corporate intranet. Using an EC2 instance and a file system or block store would be more expensive and unnecessary, because the number of requests to the data will be low and availability and speed of retrieval are not concerns. Additionally, an Amazon S3 bucket provides durability, scalability, and availability of the data.
upvoted 3 times
...
masetromain
1 year, 8 months ago
Selected Answer: D
The number of requests will be low. Availability and speed of retrieval are not concerns of the company. Which solution will meet these requirements at the LOWEST cost? I go with D
upvoted 4 times
zhangyu20000
1 year, 8 months ago
One bucket with Deep Archive as the default storage class - can this bucket use web hosting?
upvoted 2 times
bjct
1 year, 8 months ago
Yes, we can use one bucket with different storage classes to store objects, per an S3 lifecycle policy.
upvoted 1 times
...
...
...
Question #21 Topic 1

A company is using an on-premises Active Directory service for user authentication. The company wants to use the same authentication service to sign in to the company’s AWS accounts, which are using AWS Organizations. AWS Site-to-Site VPN connectivity already exists between the on-premises environment and all the company’s AWS accounts.
The company’s security policy requires conditional access to the accounts based on user groups and roles. User identities must be managed in a single location.
Which solution will meet these requirements?

  • A. Configure AWS IAM Identity Center (AWS Single Sign-On) to connect to Active Directory by using SAML 2.0. Enable automatic provisioning by using the System for Cross-domain Identity Management (SCIM) v2.0 protocol. Grant access to the AWS accounts by using attribute-based access controls (ABACs).
  • B. Configure AWS IAM Identity Center (AWS Single Sign-On) by using IAM Identity Center as an identity source. Enable automatic provisioning by using the System for Cross-domain Identity Management (SCIM) v2.0 protocol. Grant access to the AWS accounts by using IAM Identity Center permission sets.
  • C. In one of the company’s AWS accounts, configure AWS Identity and Access Management (IAM) to use a SAML 2.0 identity provider. Provision IAM users that are mapped to the federated users. Grant access that corresponds to appropriate groups in Active Directory. Grant access to the required AWS accounts by using cross-account IAM users.
  • D. In one of the company’s AWS accounts, configure AWS Identity and Access Management (IAM) to use an OpenID Connect (OIDC) identity provider. Provision IAM roles that grant access to the AWS account for the federated users that correspond to appropriate groups in Active Directory. Grant access to the required AWS accounts by using cross-account IAM roles.
Reveal Solution Hide Solution

Correct Answer: D 🗳️

Community vote distribution
A (79%)
8%
8%

masetromain
Highly Voted 1 year, 7 months ago
Selected Answer: A
https://www.examtopics.com/discussions/amazon/view/74174-exam-aws-certified-solutions-architect-professional-topic-1/ Both option C and option A are valid solutions that meet the requirements for the scenario. ABAC, or attribute-based access control, is a method of granting access to resources based on the attributes of the user, the resource, and the action. This allows for fine-grained access control, which can be useful for implementing a security policy that requires conditional access to the accounts based on user groups and roles. AWS IAM Identity Center (AWS SSO) allows you to connect to your on-premises Active Directory service using SAML 2.0. With this, you can enable automatic provisioning by using the System for Cross-domain Identity Management (SCIM) v2.0 protocol, which allows for the management of user identities in a single location.
upvoted 27 times
masetromain
1 year, 7 months ago
In option C, the company will use IAM to use a SAML 2.0 identity provider, and it will use the appropriate groups in Active Directory to grant access to the required AWS accounts by using cross-account IAM users. In this way, it can implement its security policy of conditional access to the accounts based on user groups and roles. In summary, both option A and C are valid solutions, both of them allow you to use your on-premises Active Directory service for user authentication, and both of them allow you to manage user identities in a single location and grant access to the AWS accounts based on user groups and roles.
upvoted 2 times
...
...
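To make option A's ABAC concrete, here is a minimal sketch of what a permission-set policy could look like, expressed as a Python dict. The "Department" attribute is a hypothetical example of a user attribute synced from AD via SCIM and passed as a session tag.

```python
import json

# Hypothetical ABAC policy: the user's "Department" session tag (synced from
# an AD attribute) must match the resource's "Department" tag to allow access.
abac_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:StartInstances", "ec2:StopInstances"],
        "Resource": "*",
        "Condition": {
            "StringEquals": {
                "aws:ResourceTag/Department": "${aws:PrincipalTag/Department}"
            }
        },
    }],
}
print(json.dumps(abac_policy, indent=2))
```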
bititan
Highly Voted 1 year, 7 months ago
Selected Answer: A
A has options for SAML and SCIM configuration with AD. C is all about users, and no roles are mentioned; AD user attributes cannot be mapped to IAM users directly. D is OpenID based, which MS AD would not support. So I go with A.
upvoted 13 times
trap
9 months, 2 weeks ago
Native AD doesn't support SAML 2.0 without an ADFS server. SCIM is also not supported at all. SCIM provisioning is supported by other IdPs, like Azure AD.
upvoted 3 times
gonzjo52
4 months, 2 weeks ago
Yes, they are compatible. https://aws.amazon.com/es/directoryservice/faqs/
upvoted 1 times
...
trap
9 months, 2 weeks ago
https://docs.aws.amazon.com/singlesignon/latest/userguide/supported-idps.html
upvoted 2 times
...
...
...
Ashu_0007
Most Recent 1 week, 4 days ago
AWS IAM Identity Center + SAML
upvoted 1 times
...
Vaibs099
7 months ago
A is correct. Reasons: Option A mentions Active Directory as the identity source configuration, which serves the purpose of establishing trust and sync from on-prem AD using Directory Service, and of using on-prem AD for single sign-on as asked in the question. It is also mentioned that AWS Organizations is in place, which works well with IAM Identity Center - another validation. It hints at efficiently managing AWS Organizations accounts/OUs with Identity Center (permission sets behind the scenes) for role-based access within accounts. Finally, this line - "The company's security policy requires conditional access to the accounts based on user groups and roles" - is talking about conditional access, which can only be solved by ABAC (Attribute-Based Access Control). For example, a user with a green attribute should only get access to resources with a green attribute. This can be solved with the tag functionality within IAM Identity Center.
upvoted 2 times
...
atirado
8 months, 1 week ago
Selected Answer: D
Option A - This option works, but it moves authentication and user identity management from Active Directory to Identity Center, while the question states the company wants to use the same authentication service (Active Directory) to sign in to AWS. Option B - This option works, but it moves user identity management and authentication to Identity Center, which is not what the question states the company wants. Option C - This option does not work because in AWS you provision cross-account IAM roles rather than users. Option D - This option might work, but it is missing AD FS, the component that enables OIDC flows in AD. Otherwise it keeps user identity management in one place and lets the company keep using Active Directory for authentication, as the question states.
upvoted 2 times
...
ninomfr64
8 months, 1 week ago
Selected Answer: B
Didn't spend time checking whether C and D work, because when you have an AWS Organization and need to use AD to sign in to the company's AWS accounts, AWS IdC is the way to go. Now, with AWS IdC we need ADFS, and while ADFS does not support SCIM, it is still possible to have your users and groups automatically synchronize with IAM IdC by using the SCIM API and PowerShell, as per https://aws.amazon.com/blogs/modernizing-with-aws/synchronize-active-directory-users-to-aws-iam-identity-center-using-scim-and-powershell/#:~:text=While%20ADFS%20does%20not%20support,the%20SCIM%20API%20and%20PowerShell. Finally, ABAC is an authorization strategy; it is not an alternative to IdC permission sets. Also, the scenario requires conditional access to the accounts based on user groups and roles, which points me to an RBAC strategy. I would pick ABAC if the request mentioned user attributes like Department, Cost Center, or Project.
upvoted 2 times
ninomfr64
6 months, 4 weeks ago
After reviewing it, the correct answer is A. "User identities must be managed in a single location" -> "Configure AWS IAM Identity Center (AWS Single Sign-On) to connect to Active Directory by using SAML 2.0", while B states "Configure AWS IAM Identity Center (AWS Single Sign-On) by using IAM Identity Center as an identity source". Using AWS IdC as the identity source will not meet the requirement to manage all users in a single place.
upvoted 1 times
...
...
924641e
8 months, 2 weeks ago
Answer A (AWS SSO) would be the right answer at first glance, since IAM roles can be mapped to AD groups, but it would require additional AD functions like ADFS for SCIM, so the next best option is D.
upvoted 3 times
...
subbupro
8 months, 3 weeks ago
A is the correct one, because we need to use SAML for single sign-on from the on-premises directory. C is not correct because that style of federation shouldn't come into the picture; it is for Facebook, Twitter, or Gmail account sign-on, whereas we should use the company's Active Directory. So A is the correct one.
upvoted 1 times
...
siasiasia
9 months ago
Selected Answer: C
AD and SCIM don't go together, so forget A and B. I've never seen a document about integrating OpenID with AWS account login, so D is also out. C is doable, so I go with C.
upvoted 1 times
gonzjo52
4 months, 2 weeks ago
Q: Can I use Security Assertion Markup Language (SAML) 2.0-based authentication with cloud applications that use AWS Managed Microsoft AD? Yes. You can use Microsoft Active Directory Federation Services (AD FS) for Windows 2016 with your AWS Managed Microsoft AD domain to authenticate users to SAML-compatible cloud applications. https://aws.amazon.com/es/directoryservice/faqs/
upvoted 1 times
...
...
sizzla83
9 months ago
I am with B on this one. A is incorrect because you can only use ABAC (Attribute-Based Access Control) with the IAM Identity Center identity store, NOT with Active Directory.
upvoted 1 times
ninomfr64
8 months, 1 week ago
Agree with you on B, but: - You can use IAM Identity Center to manage access to your AWS resources across multiple AWS accounts using user attributes that come from any IAM Identity Center identity source - https://docs.aws.amazon.com/singlesignon/latest/userguide/abac.html - ABAC is an authorization strategy that defines permissions based on attributes, and it is implemented using IdC permission sets.
upvoted 1 times
...
...
enk
9 months ago
Selected Answer: A
As mentioned, SAML 2.0 doesn't directly integrate with AD and requires an ADFS proxy as a go-between, so the lack of any mention of ADFS in A or B is throwing people off. However, with on-premises AD and direct/VPN connectivity, IAM Identity Center is the way to go for SSO. I believe ADFS is implied when the question casually mentions "IAM Identity Center connect to AD using SAML 2.0".
upvoted 1 times
...
severlight
9 months, 2 weeks ago
Selected Answer: A
A federated IdP is required, plus access to multiple accounts.
upvoted 1 times
...
trap
9 months, 2 weeks ago
Answers A and B are wrong!!! Active Directory doesn't support SAML without the use of an Active Directory Federation Services server!! SCIM is also not supported. The articles everyone is pasting here mention the need for AD Connector or a trust between the local AD and an AWS Managed Microsoft AD, which is not the case here. C is also wrong: a cross-account IAM users option doesn't exist. The correct answer is D!! You can use an OpenID Connect (OIDC) identity provider (e.g. Okta or Azure AD) and sync AD groups into it. You can then use cross-account roles to grant access to the federated users. https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc.html https://help.okta.com/en-us/content/topics/directory/ad-agent-manage-users-groups.htm https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_aws-accounts.html
upvoted 3 times
...
M4D3V1L
10 months, 4 weeks ago
Selected Answer: A
https://docs.aws.amazon.com/singlesignon/latest/userguide/onelogin-idp.html#onelogin-passing-abac
upvoted 1 times
...
imvb88
11 months ago
Selected Answer: A
A: the combination of SSO + SAML 2.0 + AD sounds correct. Automatic provisioning with SCIM means creating users and groups that are synced with AD. ABAC seems not to fit, as the requirement is "requires conditional access to the accounts based on user groups and roles", but that is already satisfied with SCIM. B: "use Identity Center as an identity source" -> not using on-premises AD -> wrong. D: uses OIDC -> wrong, as on-premises AD does not support OIDC (cannot find an exact source for this, but ChatGPT says so). C: creating users mapped to federated users sounds like a red flag; it could have been correct if it said "creating roles", the same way as the classic "create roles for EC2 to access S3 instead of users...". Conclusion: A
upvoted 3 times
...
whenthan
12 months ago
Selected Answer: C
A more comprehensive approach to mapping users, granting access based on groups, and utilizing cross-account IAM users.
upvoted 2 times
...
whenthan
12 months ago
C provides a more comprehensive approach.
upvoted 1 times
...
bur4an
12 months ago
Selected Answer: A
A. Configure AWS IAM Identity Center (AWS Single Sign-On) to connect to Active Directory by using SAML 2.0. Enable automatic provisioning by using the System for Cross-domain Identity Management (SCIM) v2.0 protocol. Grant access to the AWS accounts by using attribute-based access controls (ABACs). Option B does not mention the use of SAML integration with Active Directory, which is needed for the company's requirement of using the existing Active Directory for user authentication. Option C involves managing cross-account IAM users, which can be more complex and less centralized compared to using a dedicated identity service like AWS SSO. Option D involves OpenID Connect (OIDC), which is not mentioned as a requirement, and using cross-account IAM roles. While IAM roles are a valid way to grant access, the solution provided in option A offers a more centralized and streamlined approach through AWS SSO.
upvoted 1 times
...
venvig
1 year ago
Option C is NOT correct for the following reasons: While IAM can use a SAML 2.0 identity provider for federation, managing cross-account IAM users introduces complexity and can be challenging. Provisioning IAM users mapped to federated users is a manual, cumbersome process. Managing user identities across multiple AWS accounts rather than in a single location doesn't align well with the company's requirement, and it may not easily provide the granular, conditional access based on user groups and roles in Active Directory, especially across multiple accounts. So, answer A is correct: AWS Single Sign-On (SSO) is designed to integrate with identity sources, including on-premises Active Directory, via SAML 2.0. AWS SSO supports automatic provisioning with SCIM. With AWS SSO, you can grant access to AWS accounts using attribute-based access controls (ABACs), which provide the conditional access based on user groups and roles. It meets the requirement of managing user identities in a single location.
upvoted 1 times
...
chico2023
1 year ago
Selected Answer: A
Answer: A "The company’s security policy requires conditional access to the accounts based on **user groups and roles**" C would require an IdP to work.
upvoted 1 times
...
awsrd2023
1 year, 1 month ago
Selected Answer: A
Perfectly matches the requirement
upvoted 1 times
...
NikkyDicky
1 year, 1 month ago
Selected Answer: A
it's A
upvoted 1 times
...
javitech83
1 year, 2 months ago
Selected Answer: B
B is perfectly possible; we use it in my organization. AD could be possible, but A is easier to implement and fully covers the requirement. It uses the same authentication service, users are managed only in Active Directory, and permissions are assigned based on the Active Directory groups the user belongs to, which are synchronized with AWS SSO and mapped to permission sets.
upvoted 1 times
...
Jonalb
1 year, 2 months ago
Selected Answer: A
Here's how this solution satisfies the requirements: Connect to Active Directory: AWS IAM Identity Center (AWS Single Sign-On) can be configured to integrate with Active Directory using SAML 2.0. This allows for the synchronization of user identities and authentication with the on-premises Active Directory service. Automatic provisioning: By enabling automatic provisioning using the SCIM v2.0 protocol, user identities can be automatically provisioned and deprovisioned based on changes in the Active Directory. This ensures that user management remains centralized in a single location. Attribute-based access controls (ABACs): AWS IAM Identity Center supports ABACs, which allow for conditional access to AWS accounts based on user groups and roles. This enables fine-grained control over access to the AWS resources based on attributes associated with the user identities in the Active Directory.
upvoted 1 times
...
Maria2023
1 year, 2 months ago
Selected Answer: A
Initially I went for B, because I use permission sets to assign policies in AD-to-AWS integrations. But that part - "Configure AWS IAM Identity Center (AWS Single Sign-On) by using IAM Identity Center as an identity source" - means abandoning SAML and SCIM. I think the question is tricky by nature and neither answer is completely right. You definitely don't need to use attributes - the standard scenario is to provision users and groups and assign groups to accounts and permission sets.
upvoted 1 times
...
geo1551
1 year, 2 months ago
B https://docs.aws.amazon.com/singlesignon/latest/userguide/permissionsetsconcept.html
upvoted 2 times
...
johnballs221
1 year, 2 months ago
Selected Answer: D
I think A is wrong because ABAC refers to utilizing tags for access control; in this case we are required to use access control based on roles and groups, which is RBAC.
upvoted 1 times
...
chathur
1 year, 2 months ago
Selected Answer: A
The full guide is here: https://aws.amazon.com/blogs/security/configure-aws-sso-abac-for-ec2-instances-and-systems-manager-session-manager/
upvoted 1 times
...
emiliocb4
1 year, 3 months ago
Selected Answer: B
B, because AWS IAM Identity Center (AWS Single Sign-On) lets you manage user permissions in a single place with permission sets. I'm using the same setup in my organization.
upvoted 4 times
sizzla83
9 months ago
I am with B on this one. A is incorrect because you can only use ABAC (Attribute-Based Access Control) with IAM Identity Center Identity Store NOT with Active Directory.
upvoted 1 times
...
...
rtguru
1 year, 3 months ago
I go with C
upvoted 1 times
...
aca1
1 year, 3 months ago
Selected Answer: A
I will go with A. When I look at "conditional access to the accounts based on user groups and roles", this is conditional access based on groups and roles - clearly ABAC: access based on conditions/attributes of the user. For example: if the user has the role Manager (attribute) and the group Finance (attribute), then the user can access an AWS resource. Looking at "A company is using an on-premises Active Directory service for user authentication", this is SAML. AWS IAM Identity Center (AWS Single Sign-On) simplifies all this integration.
upvoted 1 times
...
gameoflove
1 year, 3 months ago
Selected Answer: D
In option A, ABAC requires tags, and federated users are filtered based on them; however, option D is the right option to my knowledge.
upvoted 1 times
...
huanaws088
1 year, 4 months ago
Selected Answer: A
It is A. I'm only voting A to increase the rate for A. https://aws.amazon.com/vi/blogs/aws/new-attributes-based-access-control-with-aws-single-sign-on/
upvoted 3 times
...
jj22222
1 year, 4 months ago
Selected Answer: D
I think it's D.
upvoted 1 times
...
mfsec
1 year, 5 months ago
Selected Answer: A
A is the best choice.
upvoted 2 times
...
Dimidrol
1 year, 5 months ago
Selected Answer: C
A and B are wrong. https://docs.aws.amazon.com/singlesignon/latest/userguide/supported-idps.html
upvoted 1 times
Scoobyben
1 year, 2 months ago
That page of documentation appears to be for IdPs *excluding* AD - which gets its own page further up in the docs: https://docs.aws.amazon.com/singlesignon/latest/userguide/manage-your-identity-source-ad.html "If you're using a self-managed directory in Active Directory or an AWS Managed Microsoft AD, see Connect to a Microsoft AD directory. For other external identity providers (IdPs), you can use AWS IAM Identity Center (successor to AWS Single Sign-On) to authenticate identities from the IdPs through the Security Assertion Markup Language (SAML) 2.0 standard. "
upvoted 1 times
...
Dimidrol
1 year, 5 months ago
Changed to D. https://aws.amazon.com/ru/blogs/security/aws-federated-authentication-with-active-directory-federation-services-ad-fs/
upvoted 2 times
...
...
mKrishna
1 year, 5 months ago
The answer is B. Option A is incorrect because it suggests using SAML 2.0 for authentication but does not address the requirements of managing user identities in a single location or providing conditional access based on user groups and roles. Option C is incorrect because it suggests creating cross-account IAM users, which would duplicate user identities across AWS accounts, defeating the purpose of a single location for managing user identities. Option D is incorrect because it suggests using an OpenID Connect (OIDC) identity provider, which does not integrate with Active Directory.
upvoted 3 times
hobokabobo
1 year, 4 months ago
"Connect (OIDC) identity provider, which does not integrate with Active Directory.": You seriously think AD does not support OIDC? Defacto standard besides SAML in most big companies which need a unified solution almost every software supports.
upvoted 1 times
...
...
cudbyanc
1 year, 6 months ago
Selected Answer: A
AWS Single Sign-On (SSO) is designed to manage access to multiple AWS accounts and business applications, and it allows users to sign in once using their existing credentials, including those from Active Directory. By configuring AWS SSO to connect to Active Directory by using SAML 2.0, the user identities can be managed in a single location. Additionally, automatic provisioning can be enabled using the System for Cross-domain Identity Management (SCIM) v2.0 protocol. This will ensure that users are created and updated in AWS SSO based on changes in Active Directory.
upvoted 2 times
...
hobokabobo
1 year, 6 months ago
Selected Answer: D
A: IMO not possible with on-premises AD (SCIM not supported). B: IMO not possible with on-premises AD (SCIM not supported). C: "Provision users in IAM" violates the requirement of central user management. D: OIDC may be an ugly pig, but it works, and using roles removes the need to maintain users in AWS. (Admittedly, A would be much nicer if it were possible.)
upvoted 2 times
...
God_Is_Love
1 year, 6 months ago
Logical answer: SAML, the existing Active Directory authentication mechanism, and ABAC are the key terms for the requirement. A fits well. D is wrong because OIDC does not need to be implemented when the auth mechanism is already in place with AD, and OIDC does not gel with Active Directory. AD plus SAML is a workable solution, though.
upvoted 1 times
...
tman22
1 year, 8 months ago
C. On-premises Active Directory does not support SCIM or OIDC. Azure AD is not mentioned.
upvoted 4 times
...
aragon_saa
1 year, 8 months ago
I choose A
upvoted 2 times
...
masetromain
1 year, 8 months ago
Selected Answer: A
I prefer to go to answer A for ABAC https://docs.aws.amazon.com/singlesignon/latest/userguide/scim-profile-saml.html https://docs.aws.amazon.com/singlesignon/latest/userguide/abac.html
upvoted 5 times
masetromain
1 year, 8 months ago
https://www.examtopics.com/discussions/amazon/view/74174-exam-aws-certified-solutions-architect-professional-topic-1/
upvoted 2 times
...
...
Question #22 Topic 1

A software company has deployed an application that consumes a REST API by using Amazon API Gateway, AWS Lambda functions, and an Amazon DynamoDB table. The application is showing an increase in the number of errors during PUT requests. Most of the PUT calls come from a small number of clients that are authenticated with specific API keys.
A solutions architect has identified that a large number of the PUT requests originate from one client. The API is noncritical, and clients can tolerate retries of unsuccessful calls. However, the errors are displayed to customers and are causing damage to the API’s reputation.
What should the solutions architect recommend to improve the customer experience?

  • A. Implement retry logic with exponential backoff and irregular variation in the client application. Ensure that the errors are caught and handled with descriptive error messages.
  • B. Implement API throttling through a usage plan at the API Gateway level. Ensure that the client application handles code 429 replies without error.
  • C. Turn on API caching to enhance responsiveness for the production stage. Run 10-minute load tests. Verify that the cache capacity is appropriate for the workload.
  • D. Implement reserved concurrency at the Lambda function level to provide the resources that are needed during sudden increases in traffic.
Reveal Solution Hide Solution

Correct Answer: B 🗳️

Community vote distribution
B (73%)
A (26%)
1%

masetromain
Highly Voted 1 year, 7 months ago
Selected Answer: B
API throttling is a technique that can be used to control the rate of requests to an API. This can be useful in situations where a small number of clients are making a large number of requests, which is causing errors. By implementing API throttling through a usage plan at the API Gateway level, the solutions architect can limit the number of requests that a client can make, which will help to reduce the number of errors. It's important that the client application handles the code 429 replies without error, this will help to improve the customer experience by reducing the number of errors that are displayed to customers. Additionally, it will prevent the API's reputation from being damaged by the errors.
upvoted 39 times
masetromain
1 year, 7 months ago
It is important to note that other solutions such as retry logic with exponential backoff and irregular variation in the client application or turn on API caching to enhance responsiveness for the production stage may help to improve the customer experience and reduce errors, but they do not address the root cause of the problem which is a large number of requests coming from a small number of clients. Implementing reserved concurrency at the Lambda function level can provide resources that are needed during sudden increases in traffic, but it does not address the issue of a client making a large number of requests and causing errors.
upvoted 15 times
...
...
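A minimal boto3 sketch of the usage-plan throttling described above, with hypothetical API, stage, and key IDs: the limits apply only to API keys attached to the plan, so the one noisy client can be reined in without affecting others.

```python
import boto3

apigw = boto3.client("apigateway")

# Hypothetical IDs: substitute your REST API id, stage name, and API key id.
plan = apigw.create_usage_plan(
    name="per-client-limits",
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],
    throttle={"rateLimit": 10.0, "burstLimit": 20},  # steady-state rps / burst
    quota={"limit": 10000, "period": "MONTH"},       # optional monthly cap
)

# Attach the noisy client's existing API key so only that key is limited.
apigw.create_usage_plan_key(
    usagePlanId=plan["id"], keyId="abcde12345", keyType="API_KEY"
)
```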
zhangyu20000
Highly Voted 1 year, 8 months ago
B is correct. API Gateway throttling can be applied per account and, via usage plans, per client - https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-request-throttling.html. Retries would make it even worse.
upvoted 8 times
...
Ashu_0007
Most Recent 1 week, 4 days ago
API Gateway throttling
upvoted 1 times
...
Jason666888
3 weeks, 2 days ago
Selected Answer: B
Keywords: a large number of PUT requests, one client. Seeing this should ring a bell about throttling on API Gateway. But normally you also need to make sure that when the client side sees "429 Too Many Requests", the app can capture that error code and show a reasonable error message (e.g., "You have sent too many requests. Please try again later"); see the sketch after this comment.
upvoted 2 times
...
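A minimal client-side sketch of handling the 429 reply gracefully, as described above. The URL and payload are hypothetical, and the Retry-After header may not always be present, hence the fallback.

```python
import requests  # any HTTP client works; requests is used here for brevity

resp = requests.put("https://api.example.com/items/42", json={"state": "done"})
if resp.status_code == 429:
    # Throttled by the usage plan: show a friendly message instead of an error.
    retry_after = int(resp.headers.get("Retry-After", "5"))
    print(f"You have sent too many requests. Please try again in {retry_after}s.")
else:
    resp.raise_for_status()
```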
gofavad926
5 months, 1 week ago
Selected Answer: B
B. C would only help with GET requests, and A and D don't prevent the problem.
upvoted 1 times
...
anubha.agrahari
5 months, 3 weeks ago
Selected Answer: B
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-request-throttling.html
upvoted 1 times
...
duriselvan
6 months, 1 week ago
B is the answer: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-request-throttling.html
upvoted 1 times
...
AimarLeo
6 months, 3 weeks ago
This question is missing MASSIVE information... none of the answers can fulfill the requirements...
upvoted 1 times
...
bjexamprep
7 months, 1 week ago
Selected Answer: A
There is no evidence indicating the problem is with throughput; if it were throughput, other clients would have similar problems. And since "the errors are displayed to customers and are causing damage to the API's reputation", the solution should reduce the error messages shown on the client side, while throttling the client will effectively close the service for that particular client, which goes against "clients can tolerate retries of unsuccessful calls". I vote A for this question.
upvoted 1 times
...
sarfaraz_khan
8 months, 1 week ago
The solutions architect should recommend option B: Implement API throttling through a usage plan at the API Gateway level. Ensure that the client application handles code 429 replies without error. Option B is the most directly related recommendation to improving the customer experience, as it addresses the issue of API rate limiting and ensures a more predictable and controlled experience for users.
upvoted 1 times
...
atirado
8 months, 1 week ago
Selected Answer: B
Option A - This option will make retries take longer on each retry for all clients rather than for the few causing issues in the application. Option B - This option will work: a usage plan allows throttling requests from specific clients identified by their API key, and ensuring client applications can handle throttling errors provides a consistent experience. Option C - This option has no relation to the problem at hand. Option D - This option assumes there is a capacity issue handling the increase in volume, but given that errors occur due to a small number of clients, reserved concurrency will not address the cause of the issue.
upvoted 2 times
...
ninomfr64
8 months, 1 week ago
Selected Answer: B
Usage plan throttling prevents a group of users or a single user from saturating the API's concurrency capacity. Thus B. A and D can also help in this scenario, but they bring less benefit than B, while C does not help at all, as I do not see how API Gateway caching can help PUT requests.
upvoted 1 times
...
severlight
9 months, 2 weeks ago
Selected Answer: B
obvious
upvoted 1 times
...
whenthan
12 months ago
Selected Answer: B
Implementing API throttling through a usage plan at the API Gateway level would directly address the issue of too many requests from a single client causing errors. Properly handling status code 429 can help clients understand the situation, and throttling ensures fair usage and prevents overload, ultimately improving the customer experience.
upvoted 1 times
...
bur4an
12 months ago
Selected Answer: B
B. Implement API throttling through a usage plan at the API Gateway level. Ensure that the client application handles code 429 replies without error. Options A and D might help with general improvements in resilience and resource allocation, but they do not specifically address the issue of a single client causing a large number of errors. Option C, involving API caching, is not the most appropriate solution in this scenario, as caching might not directly address the issue of the client generating a high volume of errors. It might improve responsiveness for frequently accessed data, but it doesn't directly address the issue of client errors.
upvoted 2 times
...
CloudHandsOn
1 year ago
Selected Answer: B
B. The error message is damaging the reputation, which is the icing on the cake when deciding between A and B. One option continues to show an error, which will continue to damage the reputation. Option A will not show an error to the end user, and will handle the issue.
upvoted 1 times
CloudHandsOn
1 year ago
CORRECTION - "Option B will not show an error.."
upvoted 1 times
...
...
chico2023
1 year ago
Selected Answer: B
Answer: B. It's not clear what error customers are getting. We can guess, however, that it is related to throttling: "A solutions architect has identified that a large number of the PUT requests originate from one client." The usual way to handle throttling is an exponential backoff technique, which is answer A. However, if I want to avoid or limit throttling for all clients and improve the reputation of my API, I would go with answer B, which limits calls, impacting only the culprits, and also handles 429 without error (which makes me assume the application will catch the error and retry).
upvoted 2 times
...
Piccaso
1 year ago
Selected Answer: B
code 429 means "Too many requests"
upvoted 2 times
...
aviathor
1 year, 1 month ago
Selected Answer: A
There is no indication in the problem statement that the errors are caused by the API being overwhelmed with requests. It also states that the errors being displayed to the user are damaging to the applications's reputation. Therefore the priority should be to avoid the errors being reported to the users, hence A.
upvoted 4 times
Impromptu
8 months, 1 week ago
I agree. Also, with option B you would introduce a (potentially) extra error: the 429 error from the throttling. You would then catch and handle it. This would work if we knew the original problem was also a 429 error, but in other cases it does not solve it; we don't know whether the applied throttling is sufficient, etc. Exponential backoff retries will increase the number of requests but spread the load over a longer timeframe, especially given the irregular variation. Moreover, the question states that clients can tolerate retries. And catching the errors and displaying descriptive error messages improves the user experience, also for non-429 errors.
upvoted 2 times
...
...
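For option A's technique, a minimal sketch of exponential backoff with full jitter (the "irregular variation" the option mentions). The helper and exception names are hypothetical, not part of any AWS SDK.

```python
import random
import time

class TransientApiError(Exception):
    """Stand-in for a retryable failure such as an HTTP 429 or 5xx reply."""

def call_with_backoff(request, max_attempts=5, base=0.5, cap=30.0):
    # Exponential backoff with "full jitter": sleep a random duration in
    # [0, min(cap, base * 2**attempt)] between attempts, so retries from
    # many clients don't synchronize into waves.
    for attempt in range(max_attempts - 1):
        try:
            return request()
        except TransientApiError:
            time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
    return request()  # final attempt: let any error propagate to the caller
```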
NikkyDicky
1 year, 1 month ago
Selected Answer: B
B - because of the issue with large number of requests from small number of clients
upvoted 1 times
...
nqg54118
1 year, 2 months ago
Selected Answer: A
exponential backoff https://docs.aws.amazon.com/ja_jp/sdkref/latest/guide/feature-retry-behavior.html
upvoted 1 times
...
dev112233xx
1 year, 3 months ago
Selected Answer: A
A makes more sense
upvoted 1 times
chikorita
1 year, 3 months ago
no bro
upvoted 1 times
aviathor
1 year, 1 month ago
Not helpful... :)
upvoted 1 times
...
...
...
liangcw305
1 year, 3 months ago
Selected Answer: B
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-request-throttling.html
upvoted 1 times
...
OnePunchExam
1 year, 4 months ago
Selected Answer: A
- B is incorrect. We use throttling to help protect APIs from being overwhelmed by too many requests (which is not the issue here!). Also, the question did not say that error 429 is being returned. - With a retry, there is a chance the API call will work, resulting in a successful response. - Also, if all else fails, returning descriptive error messages is more elegant than throwing unhandled exceptions.
upvoted 5 times
aviathor
1 year, 1 month ago
That was my thought too. It is not possible to conclude from the problem statement that the errors are caused by lack of capacity on the API side.
upvoted 1 times
...
...
Asagumo
1 year, 4 months ago
Selected Answer: D
The answer is D, but the SLA numbers do not matter. The existing system normally runs with 12 machines in a redundant configuration, so in the event of a failure the system will run on 6 machines and process scheduled jobs at 100% occupancy, giving priority to the SLAs. In other words, even after migrating to EC2 instances, it is only necessary to be able to run 6 instances for the scheduled jobs.
upvoted 1 times
...
Asagumo
1 year, 4 months ago
Selected Answer: A
The problem statement "clients can tolerate retries of unsuccessful calls" can be interpreted as allowing end users to wait indefinitely. On the other hand, "the errors are displayed to customers and are causing damage" can be interpreted to mean that the error page should not appear at all. If both are satisfied, it is A.
upvoted 6 times
...
mfsec
1 year, 5 months ago
Selected Answer: B
B is correct
upvoted 1 times
...
God_Is_Love
1 year, 6 months ago
Logical answer: While catching errors and showing a nice error message is good for customers, it still damages the API, as clients think the API is not working or responding well. Retrying and showing a nice error will still frustrate clients and damage the API :-) The API is being bombarded by a small number of clients (note they are already successfully authenticated, trying to update resources with PUT), presumably producing 429 Too Many Requests errors. So API throttling helps. Caching may give stale data (C is not apt here). Reserved concurrency is for when Lambda is overloaded (D is not a fit either). B should be correct.
upvoted 2 times
...
Mahakali
1 year, 6 months ago
Selected Answer: B
API throttling helps
upvoted 1 times
...
c73bf38
1 year, 6 months ago
Selected Answer: B
Exponential backoff is boto3 client retry logic that would impact all clients. The question states it's one client causing the issue, so A is not the correct choice. B, as API Gateway can throttle the requests and the error replies can be handled correctly.
upvoted 2 times
...
jaysparky
1 year, 6 months ago
It is B. I don't think the PUT method should be cached.
upvoted 2 times
...
zozza2023
1 year, 6 months ago
Selected Answer: B
The problem is that an error is displayed ==> the solution is API throttling.
upvoted 2 times
...
vsk12
1 year, 7 months ago
Selected Answer: A
Option B is wrong as API throttling would be applied to all the customers.
upvoted 3 times
Sarutobi
1 year, 6 months ago
It can be applied to requests with a specific API key.
upvoted 3 times
...
...
masetromain
1 year, 8 months ago
Selected Answer: A
I go with A: https://www.examtopics.com/discussions/amazon/view/69110-exam-aws-certified-solutions-architect-professional-topic-1/
upvoted 2 times
masetromain
1 year, 7 months ago
Implementing retry logic with exponential backoff and irregular variation in the client application can be a good way to improve the reliability of the application and reduce errors, but it does not address the root cause of the problem, which is a large number of requests coming from a small number of clients. Retry logic with exponential backoff works by increasing the time between retries by a certain factor (e.g. doubling it) after each failed attempt. This can help to reduce the number of errors by giving the API time to recover from a high load. However, it does not address the issue of the high load itself. If the number of requests that a client is making is causing errors, retry logic alone may not be sufficient to resolve the issue.
upvoted 2 times
masetromain
1 year, 7 months ago
Handling errors with descriptive error messages can improve the customer experience, but it does not address the underlying problem of high number of requests from a small number of clients that causes errors. Throttling is a way to control the rate of requests to an API, which can help to reduce the number of errors. By limiting the number of requests that a client can make, throttling can help to reduce the high number of requests that is causing errors, and it addresses the root cause of the problem.
upvoted 2 times
...
...
...
Question #23 Topic 1

A company is running a data-intensive application on AWS. The application runs on a cluster of hundreds of Amazon EC2 instances. A shared file system also runs on several EC2 instances that store 200 TB of data. The application reads and modifies the data on the shared file system and generates a report. The job runs once monthly, reads a subset of the files from the shared file system, and takes about 72 hours to complete. The compute instances scale in an Auto Scaling group, but the instances that host the shared file system run continuously. The compute and storage instances are all in the same AWS Region.
A solutions architect needs to reduce costs by replacing the shared file system instances. The file system must provide high performance access to the needed data for the duration of the 72-hour run.
Which solution will provide the LARGEST overall cost reduction while meeting these requirements?

  • A. Migrate the data from the existing shared file system to an Amazon S3 bucket that uses the S3 Intelligent-Tiering storage class. Before the job runs each month, use Amazon FSx for Lustre to create a new file system with the data from Amazon S3 by using lazy loading. Use the new file system as the shared storage for the duration of the job. Delete the file system when the job is complete.
  • B. Migrate the data from the existing shared file system to a large Amazon Elastic Block Store (Amazon EBS) volume with Multi-Attach enabled. Attach the EBS volume to each of the instances by using a user data script in the Auto Scaling group launch template. Use the EBS volume as the shared storage for the duration of the job. Detach the EBS volume when the job is complete
  • C. Migrate the data from the existing shared file system to an Amazon S3 bucket that uses the S3 Standard storage class. Before the job runs each month, use Amazon FSx for Lustre to create a new file system with the data from Amazon S3 by using batch loading. Use the new file system as the shared storage for the duration of the job. Delete the file system when the job is complete.
  • D. Migrate the data from the existing shared file system to an Amazon S3 bucket. Before the job runs each month, use AWS Storage Gateway to create a file gateway with the data from Amazon S3. Use the file gateway as the shared storage for the job. Delete the file gateway when the job is complete.
Reveal Solution Hide Solution

Correct Answer: D 🗳️

Community vote distribution
A (86%)
14%
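For readers unfamiliar with the pattern in option A (the community's pick), here is a minimal boto3 sketch with hypothetical subnet and bucket values: create a short-lived FSx for Lustre file system linked to S3 so file contents are lazy-loaded on first read, then delete it after the 72-hour job.

```python
import boto3

fsx = boto3.client("fsx")

# Hypothetical subnet and bucket; 1200 GiB is the smallest SCRATCH_2 size.
fs = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,  # GiB
    SubnetIds=["subnet-0123456789abcdef0"],
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",                  # short-lived scratch FS
        "ImportPath": "s3://example-monthly-job-data",  # lazy-load from S3
    },
)
print(fs["FileSystem"]["FileSystemId"])
# ...run the 72-hour job, then delete the file system to stop paying for it:
# fsx.delete_file_system(FileSystemId=fs["FileSystem"]["FileSystemId"])
```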

Question #24 Topic 1

A company is developing a new service that will be accessed using TCP on a static port. A solutions architect must ensure that the service is highly available, has redundancy across Availability Zones, and is accessible using the DNS name my.service.com, which is publicly accessible. The service must use fixed address assignments so other companies can add the addresses to their allow lists.
Assuming that resources are deployed in multiple Availability Zones in a single Region, which solution will meet these requirements?

  • A. Create Amazon EC2 instances with an Elastic IP address for each instance. Create a Network Load Balancer (NLB) and expose the static TCP port. Register EC2 instances with the NLB. Create a new name server record set named my.service.com, and assign the Elastic IP addresses of the EC2 instances to the record set. Provide the Elastic IP addresses of the EC2 instances to the other companies to add to their allow lists.
  • B. Create an Amazon ECS cluster and a service definition for the application. Create and assign public IP addresses for the ECS cluster. Create a Network Load Balancer (NLB) and expose the TCP port. Create a target group and assign the ECS cluster name to the NLB. Create a new A record set named my.service.com, and assign the public IP addresses of the ECS cluster to the record set. Provide the public IP addresses of the ECS cluster to the other companies to add to their allow lists.
  • C. Create Amazon EC2 instances for the service. Create one Elastic IP address for each Availability Zone. Create a Network Load Balancer (NLB) and expose the assigned TCP port. Assign the Elastic IP addresses to the NLB for each Availability Zone. Create a target group and register the EC2 instances with the NLB. Create a new A (alias) record set named my.service.com, and assign the NLB DNS name to the record set.
  • D. Create an Amazon ECS cluster and a service definition for the application. Create and assign public IP address for each host in the cluster. Create an Application Load Balancer (ALB) and expose the static TCP port. Create a target group and assign the ECS service definition name to the ALB. Create a new CNAME record set and associate the public IP addresses to the record set. Provide the Elastic IP addresses of the Amazon EC2 instances to the other companies to add to their allow lists.
Reveal Solution Hide Solution

Correct Answer: C 🗳️

Community vote distribution
C (100%)

God_Is_Love
Highly Voted 1 year, 6 months ago
Logical answer: a non-HTTP protocol like raw TCP on a static port should hint at NLB immediately (ALB does not fit here). Sharing the IP addresses of EC2 instances is not apt, whether they belong to individual EC2 instances or to an ECS cluster; this eliminates A, B, and D. In fact, it is the NLB's addresses, which sit in front of the EC2 instances, that need to be shared. So the only solution is C.
upvoted 12 times
...
masetromain
Highly Voted 1 year, 7 months ago
Selected Answer: C
A more appropriate solution is option C: create Amazon EC2 instances for the service, create one Elastic IP address for each Availability Zone, create a Network Load Balancer (NLB) and expose the assigned TCP port, assign the Elastic IP addresses to the NLB for each Availability Zone, create a target group and register the EC2 instances with the NLB, and create a new A (alias) record set named my.service.com that assigns the NLB DNS name to the record set. Because the A record uses the NLB as its target, traffic is routed through the NLB, which automatically sends it to healthy instances based on health checks. It also provides fixed address assignments, as the other companies can add the NLB's Elastic IP addresses to their allow lists.
upvoted 6 times
...
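A minimal boto3 sketch of option C's key step, with hypothetical subnet and allocation IDs: attaching one Elastic IP per AZ to the NLB via subnet mappings.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical subnet and Elastic IP allocation IDs, one pair per AZ.
nlb = elbv2.create_load_balancer(
    Name="my-service-nlb",
    Type="network",
    Scheme="internet-facing",
    SubnetMappings=[
        {"SubnetId": "subnet-aaaa1111", "AllocationId": "eipalloc-aaaa1111"},
        {"SubnetId": "subnet-bbbb2222", "AllocationId": "eipalloc-bbbb2222"},
        {"SubnetId": "subnet-cccc3333", "AllocationId": "eipalloc-cccc3333"},
    ],
)
print(nlb["LoadBalancers"][0]["DNSName"])  # target for the Route 53 alias record
```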
Ashu_0007
Most Recent 1 week, 4 days ago
EC2 + NLB
upvoted 1 times
...
Alawi_Amazon_AWS
4 months ago
A looks ok https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/Strategies.html
upvoted 1 times
...
gofavad926
5 months, 1 week ago
Selected Answer: C
C: NLB with elastic IPs
upvoted 1 times
...
Vaibs099
7 months ago
C is the right answer. A few key points: a static TCP port (go with NLB), and IP whitelisting is required, which can only be done with an NLB; an ALB doesn't support static IPs. Sharing the static (Elastic) IPs of the instances is of no use when using an NLB; we need to share the NLB's Elastic IPs from multiple AZs and create a DNS record mapping the domain entry to the NLB domain name.
upvoted 1 times
...
sammyhaj
7 months, 3 weeks ago
https://repost.aws/knowledge-center/elb-attach-elastic-ip-to-public-nlb
upvoted 1 times
...
Simon523
11 months, 3 weeks ago
Selected Answer: C
The other options don't mention multiple AZs.
upvoted 1 times
...
Christina666
1 year, 1 month ago
Selected Answer: C
Static IP-> NLB
upvoted 1 times
...
NikkyDicky
1 year, 1 month ago
Selected Answer: C
I suppose C, although you can't do this with a plain A record, only an alias.
upvoted 1 times
...
SkyZeroZx
1 year, 2 months ago
Selected Answer: C
Create one Elastic IP address for each Availability Zone.
upvoted 2 times
...
AWS_Sam
1 year, 3 months ago
C is the only option that talks about more than one AZ.
upvoted 1 times
...
mfsec
1 year, 5 months ago
Selected Answer: C
Create Amazon EC2 instances for the service. Create one Elastic IP address for each Availability Zone.
upvoted 2 times
...
kiran15789
1 year, 5 months ago
Selected Answer: C
IP address using NLB
upvoted 1 times
...
saurabh1805
1 year, 6 months ago
Selected Answer: C
C looks correct.
upvoted 2 times
...
zozza2023
1 year, 6 months ago
Selected Answer: C
C. NLB with one Elastic IP per AZ to handle TCP traffic. Alias record set named my.service.com. https://www.examtopics.com/discussions/amazon/view/28045-exam-aws-certified-solutions-architect-professional-topic-1/
upvoted 1 times
...
Musk
1 year, 7 months ago
Selected Answer: C
C looks correct. I did not read the rest.
upvoted 1 times
...
masetromain
1 year, 8 months ago
Selected Answer: C
https://www.examtopics.com/discussions/amazon/view/28045-exam-aws-certified-solutions-architect-professional-topic-1/
upvoted 2 times
...
Question #25 Topic 1

A company uses an on-premises data analytics platform. The system is highly available in a fully redundant configuration across 12 servers in the company’s data center.
The system runs scheduled jobs, both hourly and daily, in addition to one-time requests from users. Scheduled jobs can take between 20 minutes and 2 hours to finish running and have tight SLAs. The scheduled jobs account for 65% of the system usage. User jobs typically finish running in less than 5 minutes and have no SLA. The user jobs account for 35% of system usage. During system failures, scheduled jobs must continue to meet SLAs. However, user jobs can be delayed.
A solutions architect needs to move the system to Amazon EC2 instances and adopt a consumption-based model to reduce costs with no long-term commitments. The solution must maintain high availability and must not affect the SLAs.
Which solution will meet these requirements MOST cost-effectively?

  • A. Split the 12 instances across two Availability Zones in the chosen AWS Region. Run two instances in each Availability Zone as On-Demand Instances with Capacity Reservations. Run four instances in each Availability Zone as Spot Instances.
  • B. Split the 12 instances across three Availability Zones in the chosen AWS Region. In one of the Availability Zones, run all four instances as On-Demand Instances with Capacity Reservations. Run the remaining instances as Spot Instances.
  • C. Split the 12 instances across three Availability Zones in the chosen AWS Region. Run two instances in each Availability Zone as On-Demand Instances with a Savings Plan. Run two instances in each Availability Zone as Spot Instances.
  • D. Split the 12 instances across three Availability Zones in the chosen AWS Region. Run three instances in each Availability Zone as On-Demand Instances with Capacity Reservations. Run one instance in each Availability Zone as a Spot Instance.

Correct Answer: C 🗳️

Community vote distribution
D (91%)
9%

_lasco_
Highly Voted 1 year, 6 months ago
Selected Answer: D
Voted D because of the 65% / 35% proportion. C seems good, but with only 50% of the instances guaranteed we break the SLA.
upvoted 24 times
...
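A quick back-of-the-envelope check of the capacity argument in this thread, as a Python sketch. It assumes the reading several commenters use below: "fully redundant across 12 servers" means 6 servers carry the full load, so the SLA-bound scheduled jobs need roughly 65% of 6, about 4 guaranteed (non-Spot) instances, even after losing an AZ.

```python
# On-Demand (guaranteed) instances per AZ for each option; Spot is ignored
# because Spot capacity can be reclaimed and so cannot back a tight SLA.
options = {
    "A": [2, 2],        # plus 4 Spot per AZ
    "B": [4, 0, 0],     # all guaranteed capacity sits in a single AZ
    "C": [2, 2, 2],     # Savings Plan: also a 1-3 year commitment
    "D": [3, 3, 3],     # plus 1 Spot per AZ
}
NEEDED = 4  # assumption: ~65% of the 6-server working set

for name, od in options.items():
    worst = sum(od) - max(od)  # lose the AZ holding the most guaranteed capacity
    verdict = "meets" if worst >= NEEDED else "breaks"
    print(f"{name}: {worst} guaranteed instances after worst-case AZ loss "
          f"-> {verdict} the SLA floor")
```

Under that assumption both C and D clear the floor, but C is separately ruled out by the Savings Plan's long-term commitment, which leaves D.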
joefromnc
Highly Voted 12 months ago
Cannot be C because Savings Plans require a long-term commitment.
upvoted 7 times
...
Helpnosense
Most Recent 2 months, 1 week ago
Selected Answer: C
I vote C. The 65% of scheduled jobs is the portion of the total workload; I don't believe it's an SLA figure, since an SLA would be 99.99% or more. The jobs run hourly and take 0.3 to 2 hours, and there are 12 servers on premises. If the number of jobs one server can handle is N, then to cover the worst case where all jobs run for 2 hours, 12 servers with a tight SLA cover 12 / 2 = 6N hourly jobs. Answer C has 6 guaranteed servers, and since each server handles N jobs, those 6 servers can handle the 6N hourly jobs. 2 EC2 instances with a Savings Plan + 2 Spot Instances is more cost-effective than 3 EC2 instances with Capacity Reservations (which don't save a penny) + 1 Spot Instance.
upvoted 1 times
...
gofavad926
5 months, 1 week ago
Selected Answer: D
D is more cost-effective than C
upvoted 1 times
...
atirado
8 months, 1 week ago
Selected Answer: D
Option A - This option might not work: it might not provide sufficient processing capacity for the batch jobs to meet the SLAs during outages. Moreover, 4 servers will not provide sufficient capacity to meet the SLAs of batch jobs.
Option B - This option might not work: in case of an outage affecting the On-Demand instances there might not be enough processing capacity to meet batch job SLAs.
Option C - This option will not meet the requirement not to make any long-term commitments.
Option D - This option will work: there is sufficient processing capacity to meet the SLAs of batch jobs and keep processing one-off jobs.
upvoted 2 times
...
subbupro
8 months, 3 weeks ago
D would be perfect: the scheduled jobs need guaranteed CPU capacity, so we should reserve more On-Demand capacity.
upvoted 1 times
...
edder
9 months ago
Selected Answer: D
The answer is D. Since the original setup was fully redundant, scheduled tasks presumably run on 4 machines and user tasks on 2. A, B: requirements cannot be met when a specific AZ goes down. C: no Savings Plan is acceptable. D: even if a specific AZ goes down, 6 machines remain, so service can be maintained.
upvoted 1 times
...
Russs99
1 year ago
Selected Answer: D
About 65% or about 8 instances have to run at the same time to meet the SLA.
upvoted 3 times
...
ggrodskiy
1 year ago
Correct C. Option D is incorrect because running three instances in each Availability Zone as On-Demand Instances with Capacity Reservations will increase the cost of the solution without providing any additional benefit. Capacity Reservations are not necessary when using a Savings Plan, as they both offer guaranteed capacity at a discounted price: https://docs.aws.amazon.com/whitepapers/latest/how-aws-pricing-works/amazon-ec2.html. Also, running only one instance in each Availability Zone as a Spot Instance will not provide enough capacity for the user jobs that account for 35% of system usage.
upvoted 4 times
joefromnc
12 months ago
Can't be C; the question says it can't require a long-term commitment. Savings Plans, like Reserved Instances, require a long-term commitment with a contract.
upvoted 3 times
...
...
awsrd2023
1 year, 1 month ago
Selected Answer: D
D. 3 AZs (redundancy), 3 EC2 instances in each AZ as On-Demand and 1 as Spot (addresses the SLA in the 65/35 ratio).
Ruling-out factors:
A. Only 2 AZs (low redundancy); all EC2 under Capacity Reservations (not cost-effective, as the SLA requirement is a 65/35 ratio).
B. All 4 On-Demand in 1 AZ (low redundancy), rest Spot (will affect the tight SLA; it is effectively 35/65 instead of 65/35).
C. Savings Plan (against the no-long-term-commitments requirement).
upvoted 3 times
...
NikkyDicky
1 year, 1 month ago
Selected Answer: D
D: (1) need Capacity Reservations; (2) need to cover 65% with HA.
upvoted 1 times
...
aca1
1 year, 3 months ago
Selected Answer: D
Just D is the right one. We need to guarantee 65% of capacity (about 8 of the 12 instances) for the SLA, so 9 On-Demand instances can do it, leaving the others as Spot. Another point: Savings Plans need commitment: "Savings Plans are a flexible pricing model that offer low prices on Amazon EC2, AWS Lambda, and AWS Fargate usage, in exchange for a commitment to a consistent amount of usage (measured in $/hour) for a 1 or 3 year term" - https://aws.amazon.com/savingsplans/compute-pricing/
upvoted 3 times
...
gameoflove
1 year, 3 months ago
Selected Answer: C
Voted C. The reason is the Spot Instances, which are truly cost-saving for batch jobs; if you plan the cost properly this is the best solution.
upvoted 1 times
...
Maria2023
1 year, 4 months ago
Selected Answer: D
65% SLA can be reached only on answer D. Yeah - 9 instances are a bit too much but that's the only answer that meets the SLA
upvoted 1 times
...
rxhan
1 year, 4 months ago
Selected Answer: D
Option D splits the 12 instances across three AZs and runs three instances in each AZ as On-Demand Instances with Capacity Reservations, and one instance in each AZ as a Spot Instance. This option can provide better redundancy and capacity for scheduled jobs while still providing some cost savings through Spot Instances. Additionally, the user jobs can be easily absorbed by the available Spot Instances during On-Demand Instance failures.
upvoted 4 times
...
asifjanjua88
1 year, 4 months ago
Option C as per ChatGPT
upvoted 2 times
rxhan
1 year, 4 months ago
ChatGPT gave me option D
upvoted 3 times
...
fig
9 months, 4 weeks ago
This is proof that ChatGPT does make mistakes! Savings plans are 1 year or 3 year commitments. So C is incorrect.
upvoted 2 times
...
...
Amac1979
1 year, 5 months ago
Selected Answer: D
12 nodes in a fully redundant configuration means 6 nodes can handle the load at any given time. Of those 6 nodes, 65% of usage is SLA-driven (~4 nodes) and the 35% user load can be paused; that leaves 4 must-have nodes with no tolerance for failure. With D, if one AZ goes down you still have at least the 4 needed nodes available.
upvoted 3 times
...
mfsec
1 year, 5 months ago
Selected Answer: D
...Run one instance in each Availability Zone as a Spot Instance.
upvoted 2 times
...
higashikumi
1 year, 5 months ago
The solution that meets the requirements most cost-effectively is to split the 12 instances across three Availability Zones in the chosen AWS Region, run two instances in each Availability Zone as On-Demand Instances with a Savings Plan, and run two instances in each Availability Zone as Spot Instances.
upvoted 1 times
...
kiran15789
1 year, 5 months ago
Selected Answer: D
D -> no long-term commitment. Plus, the hourly jobs require 65% capacity.
upvoted 1 times
...
dev112233xx
1 year, 5 months ago
Selected Answer: C
I can't understand the people who voted D. Capacity Reservations are very expensive and have the same price as On-Demand, so it's not a cost-effective solution. C is the most cost-effective solution that also makes sense.
upvoted 1 times
NPN
1 year, 5 months ago
Option-C uses savings plan and needs commitment; The question says no long-term commitment; Hence option-D is the best.
upvoted 7 times
...
...
sambb
1 year, 6 months ago
Selected Answer: D
D has no long-term commitment (unlike Savings Plans) and has 75% On-Demand / 25% Spot Instances, which is near the requirements.
upvoted 2 times
...
cudbyanc
1 year, 6 months ago
Option D is also a good solution because it splits the 12 instances across three Availability Zones and uses a mix of On-Demand Instances with Capacity Reservations and Spot Instances. However, it allocates fewer On-Demand Instances than Option C, which could result in lower availability.
upvoted 1 times
...
cudbyanc
1 year, 6 months ago
Selected Answer: C
C is a good solution because it splits the 12 instances across three Availability Zones and uses a mix of On-Demand Instances with a Savings Plan and Spot Instances. On-Demand instances provide a consumption-based model with no long-term commitments, which is one of the requirements mentioned in the scenario. Although other purchasing options such as Reserved Instances or Savings Plans could offer significant discounts over On-Demand pricing, they require longer commitments and upfront payments, which may not align with the requirement of a consumption-based model with no long-term commitments. Additionally, using On-Demand instances can help to maintain high availability and meet the tight SLAs required for the scheduled jobs, as they provide the fastest and most reliable way to provision EC2 instances.
upvoted 2 times
frfavoreto
1 year, 4 months ago
The 'Capacity Reservations' model does not require any commitment or upfront payments: "You can create Capacity Reservations at any time, without entering into a one-year or three-year term commitment." "When you no longer need the capacity assurance, cancel the Capacity Reservation to release the capacity and to stop incurring charges." "When you run an instance that matches the attributes of a reservation, you just pay for the instance and nothing for the reservation. There are no upfront or additional charges." https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-capacity-reservations.html
upvoted 1 times
...
...
hobokabobo
1 year, 6 months ago
Selected Answer: D
We have an SLA to meet, and that cannot be guaranteed with Spot Instances. We need to ensure that 65% of capacity is always available. The only option that keeps at least 65% of capacity always available is D. Other options may be cheaper but do not provide the required service level.
upvoted 1 times
...
kiran15789
1 year, 6 months ago
Selected Answer: D
" with no long-term commitments." -> option c require atleast 1=3 years of commitments, so we can ignore it. So D is the best option
upvoted 1 times
...
God_Is_Love
1 year, 6 months ago
Selected Answer: C
Logical answer: A and B get eliminated because one uses only two AZs and the other is a weird proportion of 4 On-Demand and the rest Spot. That leaves C and D. Most might go for D thinking of the 65/35 proportion, but the question asks for the MOST cost-effective option, which is the one with Savings Plans, and it's just a 1-year commitment, not really long-term (https://aws.amazon.com/savingsplans/compute-pricing/). In fact the one standing out in this respect is only C: two On-Demand with a Savings Plan save costs and two Spot Instances save costs too, a win-win, and we have the same proportion in the other two AZs as well, which is good for high availability. So I choose C.
upvoted 1 times
zejou1
1 year, 5 months ago
without knowing what the company considers "long-term" we cannot make that assumption. Yes, I leaned to it at first but reviewing the statement "which solution will meet these requirements most cost-effectively?" they don't want a commitment at all.
upvoted 1 times
...
...
Amac1979
1 year, 6 months ago
C Savings plans are 60-75% savings, capacity reservations guarantee the capacity (no savings).
upvoted 1 times
...
zozza2023
1 year, 6 months ago
Selected Answer: D
SLA looks like 65%
upvoted 1 times
...
Pugsley
1 year, 7 months ago
Selected Answer: D
The math is more logical for D - look at the 65% vs 35%.
upvoted 1 times
...
masetromain
1 year, 7 months ago
Selected Answer: D
Option D is correct because it meets the requirements of maintaining high availability, meeting SLAs for scheduled jobs, and reducing costs with a consumption-based model. By splitting the 12 instances across three Availability Zones, the system can maintain high availability and availability of resources in case of a failure. Option D also uses a combination of On-Demand Instances with Capacity Reservations and Spot Instances, which allows for scheduled jobs to be run on the On-Demand instances with guaranteed capacity, while also taking advantage of the cost savings from Spot Instances for the user jobs which have lower SLA requirements.
upvoted 2 times
...
Vicious000
1 year, 7 months ago
I think it's D since it says most cost-effective.
upvoted 1 times
...
masetromain
1 year, 8 months ago
Selected Answer: D
https://www.examtopics.com/discussions/amazon/view/89276-exam-aws-certified-solutions-architect-professional-topic-1/
upvoted 3 times
...
zhangyu20000
1 year, 8 months ago
D is correct; the other options have no more than 50% of compute guaranteed, less than the required 65%.
upvoted 3 times
...
Question #26 Topic 1

A security engineer determined that an existing application retrieves credentials to an Amazon RDS for MySQL database from an encrypted file in Amazon S3. For the next version of the application, the security engineer wants to implement the following application design changes to improve security:
The database must use strong, randomly generated passwords stored in a secure AWS managed service.
The application resources must be deployed through AWS CloudFormation.
The application must rotate credentials for the database every 90 days.
A solutions architect will generate a CloudFormation template to deploy the application.
Which resources specified in the CloudFormation template will meet the security engineer’s requirements with the LEAST amount of operational overhead?

  • A. Generate the database password as a secret resource using AWS Secrets Manager. Create an AWS Lambda function resource to rotate the database password. Specify a Secrets Manager RotationSchedule resource to rotate the database password every 90 days.
  • B. Generate the database password as a SecureString parameter type using AWS Systems Manager Parameter Store. Create an AWS Lambda function resource to rotate the database password. Specify a Parameter Store RotationSchedule resource to rotate the database password every 90 days.
  • C. Generate the database password as a secret resource using AWS Secrets Manager. Create an AWS Lambda function resource to rotate the database password. Create an Amazon EventBridge scheduled rule resource to trigger the Lambda function password rotation every 90 days.
  • D. Generate the database password as a SecureString parameter type using AWS Systems Manager Parameter Store. Specify an AWS AppSync DataSource resource to automatically rotate the database password every 90 days.

Correct Answer: B 🗳️

Community vote distribution
A (100%)

Untamables
Highly Voted 1 year, 8 months ago
Selected Answer: A
A. https://docs.aws.amazon.com/secretsmanager/latest/userguide/cloudformation.html Option B is wrong: the ParameterStore::RotationSchedule resource does not exist in CloudFormation. Option C is wrong: it does not meet the requirement with the least overhead, because it does not use the native CloudFormation RotationSchedule resource. Option D is wrong: the AWS::AppSync::DataSource resource is used to create data sources that AWS AppSync resolvers connect to.
upvoted 17 times
OnePunchExam
1 year, 4 months ago
Agree with A but I want to nitpick on this reply "The ParameterStore::RotationSchedule resource does not exist in CloudFormation". It is technically more correct to say ParameterStore does not support automated rotation of secrets instead of saying ParameterStore::RotationSchedule is not supported by CF.
upvoted 9 times
...
...
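As a concrete illustration of answer A's resource set, here is a minimal AWS CDK (Python) sketch; construct names and the username are placeholders. It uses Secrets Manager's hosted rotation, which (as noted below in this thread) lets the service manage the rotation Lambda for you; option A as literally written would instead pass a custom function via rotation_lambda. Either way, CDK synthesizes an AWS::SecretsManager::RotationSchedule resource.

```python
from aws_cdk import Duration, Stack
from aws_cdk import aws_secretsmanager as secretsmanager
from constructs import Construct


class DbSecretStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Strong, randomly generated password stored in Secrets Manager.
        secret = secretsmanager.Secret(
            self, "DbCredentials",
            generate_secret_string=secretsmanager.SecretStringGenerator(
                secret_string_template='{"username": "admin"}',  # placeholder user
                generate_string_key="password",
                exclude_punctuation=True,
            ),
        )

        # Rotate every 90 days; this synthesizes a RotationSchedule resource.
        secret.add_rotation_schedule(
            "Rotate90Days",
            hosted_rotation=secretsmanager.HostedRotation.mysql_single_user(),
            automatically_after=Duration.days(90),
        )
```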
karma4moksha
Highly Voted 1 year, 3 months ago
Ans A but answer is badly phrased. Why is the Lambda needed ? Refer docs: Some services offer managed rotation, where the service configures and manages rotation for you. With managed rotation, you don't use an AWS Lambda function to update the secret and the credentials in the database. The following services offer managed rotation: Amazon RDS offers managed rotation for master user credentials. For more information, see Password management with Amazon RDS and AWS Secrets Manager in the Amazon RDS User Guide.
upvoted 12 times
ftaws
7 months, 1 week ago
I agree with you. Secrets Manager supports managed rotation of credentials.
upvoted 3 times
...
...
MAZIADI
Most Recent 2 weeks, 1 day ago
Selected Answer: A
Secrets Manager ($$$): Automatic rotation of secrets with AWS Lambda // SSM Parameter Store ($): No secret rotation (can enable rotation using Lambda triggered by EventBridge) --> more overhead even if it is cheaper ==> Answer A
upvoted 1 times
...
ivarnarik1
3 months, 3 weeks ago
Correct answer: A. In CloudFormation, Systems Manager has no resource called RotationSchedule, whereas Secrets Manager does have a RotationSchedule resource. Therefore the correct answer is A.
upvoted 1 times
...
gofavad926
5 months, 1 week ago
Selected Answer: A
A is the correct answer
upvoted 1 times
...
8608f25
6 months, 2 weeks ago
Selected Answer: A
Option A is the most straightforward and provides the least amount of operational overhead because it leverages AWS Secrets Manager’s native capabilities for secret rotation. This eliminates the need for custom rotation logic or external triggers for rotation, unlike the other options that either rely on AWS Systems Manager Parameter Store (which does not have built-in secret rotation capabilities like Secrets Manager) or require additional resources such as Amazon EventBridge or AWS AppSync for triggering rotations, which complicates the architecture and increases operational overhead. Therefore, Option A is the correct choice as it directly addresses all the specified requirements using the intended features of AWS services, ensuring security and efficiency with minimal operational complexity.
upvoted 3 times
...
AimarLeo
6 months, 3 weeks ago
OK, A... but a Lambda to rotate for Secrets Manager? It supports managed rotation natively, so why is that needed?
upvoted 3 times
...
atirado
8 months, 1 week ago
Selected Answer: A
Option A - This option will work: it takes advantage of the automatic rotation feature in Secrets Manager, which reduces operational overhead during secret rotation (e.g. CloudTrail will show that a secret was rotated).
Option B - This option will not work: Parameter Store does not have a RotationSchedule feature.
Option C - This option might work but increases overhead: rotation will be triggered on the 90-day schedule, but more work is needed to validate that the secret was rotated and tested (CloudTrail logs will only show that a Lambda function was triggered).
Option D - This option will not work: an AppSync DataSource does not rotate secrets.
upvoted 3 times
...
shaaam80
8 months, 3 weeks ago
Selected Answer: A
Answer A. Password rotation -> Secrets Manager
upvoted 1 times
...
whenthan
11 months, 4 weeks ago
Selected Answer: A
Which resources specified in the CloudFormation template will meet the security engineer's requirements with the LEAST amount of operational overhead? See https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-secretsmanager-rotationschedule.html
upvoted 1 times
...
SK_Tyagi
1 year ago
All - I feel the answer is A, but why does it say Correct Answer "B"? What is the rationale behind B, can anyone explain? I am so confused.
upvoted 2 times
The answers shown as correct are almost never the right ones on these test dumps, just pay attention to what was most voted and the discussions in the comments
upvoted 4 times
...
...
chico2023
1 year ago
Selected Answer: A
Answer: A
upvoted 1 times
...
NikkyDicky
1 year, 1 month ago
Selected Answer: A
It's an A.
upvoted 1 times
...
rtguru
1 year, 3 months ago
A is poorly phrased but seems to be the best option in this scenario.
upvoted 1 times
...
gameoflove
1 year, 3 months ago
Selected Answer: A
AWS Secrets Manager is the best option for password safety, and option A fulfills all the requirements.
upvoted 1 times
...
chiplyti
1 year, 4 months ago
Selected Answer: A
A correct
upvoted 1 times
...
mfsec
1 year, 5 months ago
Selected Answer: A
Secrets Manager RotationSchedule resource
upvoted 1 times
...
kiran15789
1 year, 5 months ago
Selected Answer: A
https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotate-secrets_managed.html
upvoted 1 times
...
_lasco_
1 year, 6 months ago
Selected Answer: A
voted A, rotation with secrets manager: https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotate-secrets_managed.html
upvoted 1 times
...
cudbyanc
1 year, 6 months ago
Selected Answer: A
The best solution is either A or C, but A may be the LEAST amount of operational overhead since it uses AWS Secrets Manager's built-in rotation functionality.
upvoted 3 times
...
God_Is_Love
1 year, 6 months ago
Logical answer: only Secrets Manager can support password rotation, not Parameter Store. Parameter Store, as its name suggests, is just a location to refer to or be referred from elsewhere, so B and D are eliminated. C is wrong because there is no need for an EventBridge rule to fire a known 90-day trigger; a rotation schedule is already available when you configure a secret in Secrets Manager. That leaves option A as correct.
upvoted 3 times
...
zozza2023
1 year, 6 months ago
Selected Answer: A
Secrets Manager support RotationSchedule.
upvoted 1 times
...
Musk
1 year, 7 months ago
Selected Answer: A
Option B is not wrong, but it has more operational overhead compared to option A. Option B uses AWS Systems Manager Parameter Store, which is less automated and requires manual intervention to perform password rotation. Option A uses AWS Secrets Manager, which provides a built-in mechanism to rotate secrets, reducing operational overhead.
upvoted 1 times
...
masetromain
1 year, 7 months ago
Selected Answer: A
Option A is the correct answer because it meets the security engineer's requirements with the least amount of operational overhead. This option uses AWS Secrets Manager to generate the database password as a secret resource, which is a secure and managed service for storing and rotating secrets such as database credentials. The CloudFormation template also includes a Lambda function resource to rotate the password, and a Secrets Manager RotationSchedule resource to schedule the password rotation every 90 days. This option is the correct answer because it is the best way to manage the password rotation, Secrets Manager is a fully managed service that encrypts and stores the credentials and rotates the credentials automatically, and CloudFormation is used to automate the deployment of the resources.
upvoted 3 times
...
robertohyena
1 year, 8 months ago
Selected Answer: A
Secrets Manager support RotationSchedule. Not ParameterStore.
upvoted 4 times
...
masetromain
1 year, 8 months ago
Selected Answer: A
https://www.examtopics.com/discussions/amazon/view/47127-exam-aws-certified-solutions-architect-professional-topic-1/
upvoted 2 times
...
nyunyu
1 year, 8 months ago
A https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-secretsmanager-rotationschedule.html
upvoted 2 times
...
zhangyu20000
1 year, 8 months ago
C is correct - https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html
upvoted 2 times
Cloud_noob
1 year, 6 months ago
Appreciate your participation in the discussions. However, I suggest doing proper research before voicing your opinion.
upvoted 1 times
...
...
Question #27 Topic 1

A company is storing data in several Amazon DynamoDB tables. A solutions architect must use a serverless architecture to make the data accessible publicly through a simple API over HTTPS. The solution must scale automatically in response to demand.
Which solutions meet these requirements? (Choose two.)

  • A. Create an Amazon API Gateway REST API. Configure this API with direct integrations to DynamoDB by using API Gateway’s AWS integration type.
  • B. Create an Amazon API Gateway HTTP API. Configure this API with direct integrations to DynamoDB by using API Gateway’s AWS integration type.
  • C. Create an Amazon API Gateway HTTP API. Configure this API with integrations to AWS Lambda functions that return data from the DynamoDB tables.
  • D. Create an accelerator in AWS Global Accelerator. Configure this accelerator with AWS Lambda@Edge function integrations that return data from the DynamoDB tables.
  • E. Create a Network Load Balancer. Configure listener rules to forward requests to the appropriate AWS Lambda functions.

Correct Answer: CD 🗳️

Community vote distribution
AC (81%)
Other

Untamables
Highly Voted 1 year, 8 months ago
Selected Answer: AC
A and C. API Gateway REST API can invoke DynamoDB directly. https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-overview-developer-experience.html
upvoted 27 times
ixdb
8 months, 2 weeks ago
CD is right. While option A works for private access, it does not support public access, as DynamoDB tables are not publicly accessible by default.
upvoted 2 times
Impromptu
8 months, 1 week ago
Option A has the ability to specify an execution role. This IAM role should have the GetItem/PutItem permissions for the given DynamoDB table. That way you can have access to your private table via the DynamoDB API while your API Gateway is publicly available. So I agree with A and C
upvoted 2 times
...
...
jpa8300
8 months ago
You cannot choose A and C; you choose A OR C, since one excludes the other. When a question says to choose two answers, one should complement the other. I agree that the API can integrate directly with DynamoDB, but if I have to choose two answers that complement each other, option A cannot go with any of the others. That said, the only possible choices should be C and D: you create the Lambda functions to integrate with DynamoDB and then deploy them at the edge, and as an extra you use Global Accelerator to improve performance and latency. True, that is not a requirement, but it is good to have.
upvoted 4 times
...
...
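To make answer C tangible, the Lambda behind the HTTP API can be as small as the sketch below. The table name "Items", its key attribute "pk", and a GET /items/{id} route are assumptions for illustration.

```python
import json

import boto3

table = boto3.resource("dynamodb").Table("Items")  # placeholder table name


def handler(event, context):
    # HTTP API (payload format v2) delivers path parameters here.
    item_id = (event.get("pathParameters") or {}).get("id")
    if not item_id:
        return {"statusCode": 400, "body": json.dumps({"message": "missing id"})}

    item = table.get_item(Key={"pk": item_id}).get("Item")
    if item is None:
        return {"statusCode": 404, "body": json.dumps({"message": "not found"})}

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(item, default=str),  # default=str copes with Decimal
    }
```

Both the HTTP API and Lambda scale automatically with demand, which is the requirement the question stresses.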
atirado
Highly Voted 8 months, 1 week ago
Selected Answer: AC
Option A - This option might work: REST APIs can run over HTTPS and the DynamoDB integration type is possible.
Option B - This option will not work: HTTP APIs do not support integration types for DynamoDB.
Option C - This option will work: HTTP APIs support integration with Lambda functions.
Option D - This option will not work: Lambda@Edge is a feature of CloudFront.
Option E - This option will not work: only ALB target groups (not NLB) can target Lambda functions, and load balancers are not a serverless solution (they are deployed in VPCs).
upvoted 10 times
...
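And for answer A, the "AWS integration type" boils down to a REST API method integrated straight with a DynamoDB action through a VTL mapping template, no Lambda involved. A rough boto3 sketch with placeholder API/resource IDs, table name, and IAM role (the role must allow dynamodb:GetItem, and a matching integration response and method response are still needed for a complete method):

```python
import boto3

apigw = boto3.client("apigateway", region_name="us-east-1")  # assumption: Region

REST_API_ID = "abc123"   # placeholder
RESOURCE_ID = "def456"   # placeholder: the resource for /items/{id}
ROLE_ARN = "arn:aws:iam::111122223333:role/ApiGwDynamoRole"  # placeholder

# VTL template mapping the path parameter onto a DynamoDB GetItem request.
request_template = """
{
  "TableName": "Items",
  "Key": {"pk": {"S": "$input.params('id')"}}
}
"""

apigw.put_integration(
    restApiId=REST_API_ID,
    resourceId=RESOURCE_ID,
    httpMethod="GET",
    type="AWS",                    # direct AWS-service integration
    integrationHttpMethod="POST",  # DynamoDB API calls are POSTs
    uri="arn:aws:apigateway:us-east-1:dynamodb:action/GetItem",
    credentials=ROLE_ARN,
    requestTemplates={"application/json": request_template},
)
```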
jyrajan69
Most Recent 4 days ago
On simplicity alone it looks like BC, but with C there is the issue of Lambda fetching data; the question does not indicate fetching, only put. So it looks like AB.
upvoted 1 times
...
gofavad926
5 months, 1 week ago
Selected Answer: AC
A and C
upvoted 1 times
...
Russs99
5 months, 1 week ago
Selected Answer: CD
The solutions that meet the requirements of using a serverless architecture to make the data accessible publicly through a simple API over HTTPS and scaling automatically in response to demand are: C AND D
upvoted 1 times
Russs99
5 months, 1 week ago
Actually, option D is out; reason: you cannot use AWS Lambda@Edge with Global Accelerator.
upvoted 1 times
...
...
JOKERO
5 months, 3 weeks ago
A, C - https://medium.com/brlink/rest-api-just-with-apigateway-and-dynamodb-8a9b0cd76b7a
upvoted 1 times
...
anubha.agrahari
5 months, 3 weeks ago
Selected Answer: A
API Gateway REST API can invoke DynamoDB directly.
upvoted 1 times
...
DmitriKonnovNN
6 months, 2 weeks ago
Sometimes when multiple answers are required they're supposed to complement each other, but sometimes they are just two valid but independent solutions. Well, an API GW REST endpoint is a valid solution, since it has had DynamoDB proxy integration lately. We use it in production and it's a good fit if you want a lot of control and features in your API GW and no Lambda functions in between, the reason being that VTL supports a big enough set of mutations for us. On the flip side, if we're forced to use a combination, then CD is the right answer. In terms of simplicity, the question is what you consider simple. An API GW REST endpoint is considered simple because it provides caching, API keys, usage plans, rate limiting, authorization, deployment stages, etc. out of the box. So the plethora of out-of-the-box features is simpler than implementing them oneself.
upvoted 1 times
...
ninomfr64
8 months, 1 week ago
Selected Answer: BC
Not E, as I think NLB listener rules don't provide the required capability to forward requests to the appropriate Lambda (you need an ALB). Not D, as Lambda@Edge is a CloudFront feature. A, B and C all work here, however the question requires "a simple API over HTTPS". Both REST APIs and HTTP APIs are RESTful API products; REST APIs support more features than HTTP APIs, while HTTP APIs are designed with minimal features so that they can be offered at a lower price. Thus I would go for B and C.
upvoted 1 times
ninomfr64
8 months, 1 week ago
My answer is wrong; I double-checked and DynamoDB is not supported as a first-class integration with API Gateway HTTP APIs, per the doc https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-develop-integrations-aws-services-reference.html. Thus the correct answer is A and C.
upvoted 2 times
...
...
subbupro
8 months, 3 weeks ago
C and D are the correct options. 1) C - We need a serverless architecture, so use a Lambda function instead of the REST API direct integration. 2) D - Global Accelerator working with Lambda@Edge would be the best option, compared to an NLB, for automatic scale up and down; it has a static address and a fixed entry point if we deploy in multiple Regions.
upvoted 2 times
...
Hit1979
8 months, 4 weeks ago
Selected Answer: CE
A REST API is not simple and has limitations around scalability. An NLB with listener rules can be used to forward requests, based on specified conditions, to the appropriate AWS Lambda function.
upvoted 1 times
...
severlight
9 months, 2 weeks ago
Selected Answer: AC
lambda can have https endpoints available
upvoted 1 times
...
rodrod
11 months, 2 weeks ago
Selected Answer: BC
I've read similar questions previously; the keyword is "simple API". A REST API adds more features than an HTTP API and is considered "more" complex, so it has to be HTTP for that reason alone. You can use API Gateway (HTTP) -> DynamoDB: https://aws.amazon.com/fr/blogs/compute/using-amazon-api-gateway-as-a-proxy-for-dynamodb/ so B and C.
upvoted 3 times
sonyaws
9 months ago
BC. HTTP APIs support AWS integrations and are simpler: https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-vs-rest.html
upvoted 2 times
...
...
bur4an
12 months ago
Selected Answer: BC
B. Create an Amazon API Gateway HTTP API. Configure this API with direct integrations to DynamoDB by using API Gateway’s AWS integration type. C. Create an Amazon API Gateway HTTP API. Configure this API with integrations to AWS Lambda functions that return data from the DynamoDB tables. Options A, D, and E do not align with the requirements as well: A. Amazon API Gateway REST API with Direct DynamoDB Integration: While REST APIs could work, HTTP APIs are generally more lightweight and cost-effective. Also, direct integration with DynamoDB using REST APIs could be more complex to set up compared to HTTP APIs.
upvoted 3 times
...
Russs99
1 year ago
Selected Answer: AB
Option C suggests configuring an Amazon API Gateway HTTP API with integrations to AWS Lambda functions that return data from the DynamoDB tables. However, this approach would introduce unnecessary complexity and additional steps since it involves using AWS Lambda as a middle layer to fetch data from DynamoDB
upvoted 1 times
...
chico2023
1 year ago
Selected Answer: AC
Answer: A and C
upvoted 1 times
...
pupsik
1 year, 1 month ago
Selected Answer: AB
https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-vs-rest.html
upvoted 1 times
pupsik
1 year, 1 month ago
Oops, it's AC. DynamoDB is not one of the supported services for HTTP APIs: https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-develop-integrations-aws-services-reference.html
upvoted 1 times
...
...
NikkyDicky
1 year, 1 month ago
Selected Answer: AC
AC. B is not supported by the HTTP API gateway.
upvoted 1 times
...
Jonalb
1 year, 2 months ago
Selected Answer: AC
AAAACCCC
upvoted 1 times
...
chathur
1 year, 2 months ago
Selected Answer: AC
An HTTP API is a lightweight REST API that only supports two types of backend, Lambda and HTTP, while a REST API supports three backends: Lambda, HTTP and AWS services (DynamoDB, for example). Source: https://medium.com/@fengliplatform/api-gateway-talks-to-dynamodb-in-two-ways-f45356c87986 This is a tutorial with screenshots, which means A & C are doable.
upvoted 4 times
ailves
1 year, 2 months ago
According to https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-vs-rest.html, HTTP APIs also support AWS services (like DynamoDB).
upvoted 1 times
ailves
1 year, 2 months ago
Actually, HTTP APIs support: Lambda and HTTP backends.
upvoted 1 times
...
...
...
mKrishna
1 year, 3 months ago
A & C. Serverless pattern diagrams at https://serverlessland.com/patterns?services=apigw%2Cdynamodb
upvoted 2 times
...
OnePunchExam
1 year, 4 months ago
Selected Answer: AC
A & C. A https://aws.amazon.com/blogs/compute/using-amazon-api-gateway-as-a-proxy-for-dynamodb/ C https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-dynamo-db.html Also do learn when to use API GW REST vs HTTP
upvoted 1 times
...
mfsec
1 year, 5 months ago
Selected Answer: AC
AC is a good fit
upvoted 2 times
...
mKrishna
1 year, 5 months ago
Ans is A & C Option B: HTTP APIs do not currently support integrations with DynamoDB, and therefore this solution would not work. Option D: AWS Global Accelerator and AWS Lambda@Edge, which both involve infrastructure management. Option E: NLB does not meet the requirement of being serverless.
upvoted 2 times
...
kiran15789
1 year, 5 months ago
Selected Answer: AC
going with A and C
upvoted 1 times
...
_lasco_
1 year, 6 months ago
Selected Answer: AC
I voted A and C. API Gateway REST APIs support direct integration with DynamoDB; the same can be achieved with HTTP APIs by putting a Lambda between the two.
upvoted 2 times
...
Gabehcoud
1 year, 6 months ago
Think it should be CD. Snippet from the link https://aws.amazon.com/api-gateway/faqs/ below:
HTTP APIs are ideal for: building proxy APIs for AWS Lambda or any HTTP endpoint; building modern APIs that are equipped with OIDC and OAuth 2 authorization; workloads that are likely to grow very large; APIs for latency-sensitive workloads.
REST APIs are ideal for: customers looking to pay a single price point for an all-inclusive set of features needed to build, manage, and publish their APIs.
upvoted 1 times
...
God_Is_Love
1 year, 6 months ago
API Gateway is the solution for a simple API. D is a Lambda@Edge route for faster responses (and Lambda@Edge belongs to CloudFront, not Global Accelerator); the requirement says API, so D gets eliminated. E is irrelevant, of course. B is wrong because HTTP APIs have no direct DynamoDB integration. That leaves A and C as the correct answers. (If the question asked for more security, not exposing DynamoDB directly, I'd go for C.)
upvoted 1 times
...
c73bf38
1 year, 6 months ago
Selected Answer: AC
To make the data accessible publicly through a simple API over HTTPS while using a serverless architecture, the recommended solutions are to use Amazon API Gateway with direct integrations to DynamoDB or with integrations to AWS Lambda functions. Option A is a valid solution. With a REST API, API Gateway can be configured with direct integrations to DynamoDB, which eliminates the need for a Lambda function. Option C is also a valid solution. With an HTTP API, API Gateway can be configured with integrations to AWS Lambda functions that return data from the DynamoDB tables. This solution provides more flexibility since Lambda can be used to customize the data returned from the DynamoDB tables before it is sent back to the client.
upvoted 2 times
...
zozza2023
1 year, 6 months ago
Selected Answer: AC
A and C are the correct answers.
upvoted 2 times
...
masetromain
1 year, 7 months ago
Selected Answer: AC
A and C are the correct answers. A. Create an Amazon API Gateway REST API. Configure this API with direct integrations to DynamoDB by using API Gateway’s AWS integration type. C. Create an Amazon API Gateway HTTP API. Configure this API with integrations to AWS Lambda functions that return data from the DynamoDB tables. By Using Amazon API Gateway, the solution will automatically scale in response to demand, and it will also provide a simple API over HTTPS. While using the Lambda function the data can be accessed from the DynamoDB tables.
upvoted 4 times
moota
1 year, 6 months ago
For A, this one to be specific https://aws.amazon.com/blogs/compute/using-amazon-api-gateway-as-a-proxy-for-dynamodb/
upvoted 1 times
...
...
eraser2021999
1 year, 7 months ago
Selected Answer: AC
Lambda@Edge is available for CloudFront and not for Global Accelerator.
upvoted 3 times
...
masetromain
1 year, 8 months ago
Selected Answer: CD
OK with CD https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-dynamo-db.html
upvoted 3 times
rodrod
11 months, 2 weeks ago
D is a distractor; you CAN'T use AWS Lambda@Edge with Global Accelerator.
upvoted 1 times
...
...
Question #28 Topic 1

A company has registered 10 new domain names. The company uses the domains for online marketing. The company needs a solution that will redirect online visitors to a specific URL for each domain. All domains and target URLs are defined in a JSON document. All DNS records are managed by Amazon Route 53.
A solutions architect must implement a redirect service that accepts HTTP and HTTPS requests.
Which combination of steps should the solutions architect take to meet these requirements with the LEAST amount of operational effort? (Choose three.)

  • A. Create a dynamic webpage that runs on an Amazon EC2 instance. Configure the webpage to use the JSON document in combination with the event message to look up and respond with a redirect URL.
  • B. Create an Application Load Balancer that includes HTTP and HTTPS listeners.
  • C. Create an AWS Lambda function that uses the JSON document in combination with the event message to look up and respond with a redirect URL.
  • D. Use an Amazon API Gateway API with a custom domain to publish an AWS Lambda function.
  • E. Create an Amazon CloudFront distribution. Deploy a Lambda@Edge function.
  • F. Create an SSL certificate by using AWS Certificate Manager (ACM). Include the domains as Subject Alternative Names.

Correct Answer: BCF 🗳️

Community vote distribution
CEF (47%)
BCF (35%)
Other

masetromain
Highly Voted 1 year, 7 months ago
Selected Answer: CEF
C: By creating an AWS Lambda function, the solution architect can use the JSON document to look up the target URLs for each domain and respond with the appropriate redirect URL. This way, the solution does not need to rely on a web server to handle the redirects, which reduces operational effort. E: By creating an Amazon CloudFront distribution, the solution architect can deploy a Lambda@Edge function that can look up the target URLs for each domain and respond with the appropriate redirect URL. This way, CloudFront can handle the redirection, which reduces operational effort. F: By creating an SSL certificate with ACM and including the domains as Subject Alternative Names, the solution architect can ensure that the redirect service can handle both HTTP and HTTPS requests, which is required by the company.
upvoted 34 times
Shahul75
1 year, 6 months ago
A SAN certificate alone cannot handle redirects; we still need something to redirect HTTP to HTTPS.
upvoted 1 times
...
masetromain
1 year, 7 months ago
A and B are not the right answer because they would require configuring and maintaining a web server to handle the redirects, which would increase operational effort. D is not the right answer because it would require creating an API Gateway API, which increases operational effort.
upvoted 6 times
Arnaud92
1 year, 5 months ago
Wrong about B: Lambda can be an ALB target, so you do not need a web server.
upvoted 8 times
...
...
...
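For those weighing the CEF route, a minimal Lambda@Edge viewer-request sketch is below; the in-code mapping stands in for the JSON document, and the domains are placeholders. (Lambda@Edge functions must be deployed in us-east-1 and attached to a CloudFront distribution.)

```python
# Stand-in for the JSON document of domain -> target URL mappings.
REDIRECTS = {
    "promo-one.example": "https://www.example.com/landing/one",
    "promo-two.example": "https://www.example.com/landing/two",
}


def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    host = request["headers"]["host"][0]["value"].lower()
    target = REDIRECTS.get(host)
    if target is None:
        return request  # unknown domain: pass the request through to the origin

    # Returning a response object from a viewer-request function short-circuits
    # CloudFront and sends the redirect straight back to the viewer.
    return {
        "status": "301",
        "statusDescription": "Moved Permanently",
        "headers": {"location": [{"key": "Location", "value": target}]},
    }
```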
chathur
Highly Voted 1 year, 2 months ago
Selected Answer: BCF
If you go with CloudFront, what is the origin? Lambda@Edge is not an origin. The function mentioned in C is a plain Lambda and E talks about Lambda@Edge, which are two different things; if you handle the redirect from Lambda@Edge in CloudFront, there is no need for the Lambda described in answer C. My answer: create an ALB with HTTP and HTTPS listeners (B), use the TLS cert created in F for the HTTPS listener, and as the backend for the ALB write a Lambda with the endpoint-mapping JSON (C). Is this fully serverless? No, but you do not have to worry about scaling or operational overhead; AWS handles everything for us.
upvoted 26 times
dubyaF
8 months ago
This is the only answer that is completed by using all three selected options, BCF. F is mandatory to serve the marketing domains over HTTPS. B and C then work together to redirect to the target URLs as a full solution, like https://aws.amazon.com/ko/blogs/networking-and-content-delivery/automating-http-s-redirects-and-certificate-management-at-scale/ E may have partial potential to do something, but you have no origin with it - and what would the origin be? With BCF you hit the ALB, get a redirect for the marketing URL, and you're done; it's a complete redirect solution, which is what the whole requirement asks for.
upvoted 4 times
...
...
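And for the BCF route chathur describes, the same redirect logic runs as an ALB Lambda target; only the event and response shapes change (ALB lowercases header names and expects statusCode/statusDescription). Again, the mapping and domains are placeholders for the JSON document.

```python
# Stand-in for the JSON document of domain -> target URL mappings.
REDIRECTS = {
    "promo-one.example": "https://www.example.com/landing/one",
    "promo-two.example": "https://www.example.com/landing/two",
}


def handler(event, context):
    host = (event.get("headers") or {}).get("host", "").lower()
    target = REDIRECTS.get(host)
    if target is None:
        return {
            "statusCode": 404,
            "statusDescription": "404 Not Found",
            "headers": {"Content-Type": "text/plain"},
            "body": "Unknown domain",
        }
    return {
        "statusCode": 301,
        "statusDescription": "301 Moved Permanently",
        "headers": {"Location": target},
        "body": "",
    }
```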
ry1999
Most Recent 6 days, 9 hours ago
Selected Answer: BCF
Initially I thought E; however, I realized that Lambda@Edge needs an origin to send traffic to. The purpose of Lambda at the edge is to make adjustments to the request/response, not to reroute the way we're trying to here. D is out because Amazon API Gateway does not support unencrypted (HTTP) endpoints. A is too much overhead. BCF.
upvoted 1 times
...
Jason666888
3 weeks ago
Selected Answer: BCF
BCF (Application Load Balancer, AWS Lambda, ACM) is preferred for its simplicity, ease of setup, and cost predictability. It handles both HTTP and HTTPS traffic effectively with less operational complexity compared to the CloudFront and Lambda@Edge setup. CEF (AWS Lambda, CloudFront, ACM) is a powerful solution for global low-latency requirements but may introduce unnecessary complexity and higher costs for a simple redirection service.
upvoted 1 times
...
vip2
2 months ago
Selected Answer: BEF
The correct answer is BEF. The main discussion here is that Lambda@Edge can redirect viewer requests based on the domain and path information in the request.
upvoted 2 times
Reval
4 weeks, 1 day ago
B is not correct. While an ALB could handle HTTP/HTTPS requests, it still requires managing target groups and does not directly integrate with a simple Lambda function for routing.
upvoted 1 times
...
...
Bereket
2 months, 1 week ago
Selected Answer: CDF
CDF is the answer. Check it again.
upvoted 1 times
...
Helpnosense
2 months, 1 week ago
Selected Answer: BCF
Vote BCF
upvoted 1 times
...
QasimAWS
4 months ago
Guys, the answer has to be B, C, and E. Period.
upvoted 1 times
...
Dgix
5 months, 1 week ago
Selected Answer: CEF
CEF is the correct combination.
upvoted 1 times
...
24Gel
5 months, 2 weeks ago
E is a bit blurry; it seems like an unfinished sentence to me.
upvoted 1 times
...
atirado
8 months, 1 week ago
Selected Answer: BCF
Option A - This option could work but it increases operational overhead: deploying an EC2 instance requires building a VPC with one public subnet, and the architect will also need to write an application to process the event.
Option B - This option could work and reduces operational overhead: an ALB helps expose the solution and respond to HTTP/S requests (a VPC will be needed); it can target EC2 instances and Lambda functions.
Option C - This option could work and minimizes operational overhead: the architect can focus on writing the code to process the event; a VPC is not necessarily needed to deploy a Lambda function.
Option D - This option might not work: API Gateway APIs only respond to HTTPS.
Option E - This option might not work: it can respond to HTTP/S requests and send events to an origin such as API Gateway, but then there would be no need to deploy a Lambda@Edge function.
Option F - This option will contribute to the solution: it enables HTTPS for the 10 domains.
upvoted 4 times
...
jainparag1
9 months ago
Selected Answer: BEF
Lambda@Edge allows you to execute custom business logic closer to the viewer. This capability enables intelligent, programmable processing of HTTP requests at locations that are closer (for latency purposes) to your viewer. In this case the Lambda@Edge function can be written so that it redirects viewers based on the domain and path information in the request. To accept multiple custom domains on the CloudFront distribution, a certificate can be created in ACM that includes multiple subject alternative names. These names can then be used in Route 53 records pointing to the distribution. The ALB will need to be configured with both an HTTP and an HTTPS listener. The HTTPS listener will also require a certificate, which could be the same certificate used in the CloudFront distribution or a separate one.
upvoted 5 times
jainparag1
9 months ago
INCORRECT: "Create a dynamic webpage and host it on an Amazon EC2 instance. Configure the webpage to use the JSON document in combination with the event message to look up and respond with a redirect URL" is incorrect. While designing such a solution, serverless should be utilized and hence EC2 isn’t an appropriate use case for this scenario. INCORRECT: "Create an AWS Lambda function that uses the event message and specified JSON document to look up and respond with a redirect URL" is incorrect. With this solution, Lambda would need a change every time config file changes and would increase effort, hence this is not an efficient option. INCORRECT: "Use an Amazon API Gateway API with a custom domain to publish an AWS Lambda function" is incorrect. With this option as well, for each domain addition or change, API gateway stages would have to be re deployed hence this is again an ineffective choice.
upvoted 3 times
jainparag1
9 months ago
References: https://aws.amazon.com/premiumsupport/knowledge-center/elb-redirect-to-another-domain-with-alb/ https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-edge-how-it-works-tutorial.html Save time with our AWS cheat sheets: https://digitalcloud.training/amazon-cloudfront/
upvoted 3 times
...
...
...
severlight
9 months, 2 weeks ago
Selected Answer: CEF
as they fit each other
upvoted 1 times
...
Jay_2pt0_1
9 months, 3 weeks ago
B, E, F - Lambda@Edge will allow for processing before directing to ALB
upvoted 2 times
...
rlf
10 months, 3 weeks ago
BCF. We need to choose the solution with three steps, and here is a blog covering the same situation (an older scenario with limited scale: S3 and CloudFront, without Lambda@Edge): https://aws.amazon.com/ko/blogs/networking-and-content-delivery/automating-http-s-redirects-and-certificate-management-at-scale/
upvoted 3 times
...
vjp_training
11 months, 1 week ago
Selected Answer: BEF
trust me
upvoted 5 times
...
Simon523
11 months, 3 weeks ago
Selected Answer: CEF
E is correct, because Lambda@Edge can redirect to a different URI. https://aws.amazon.com/tw/blogs/networking-and-content-delivery/handling-redirectsedge-part1/
upvoted 2 times
...
Greyeye
1 year ago
I thought about it, but I would pick C, E, F, so Lambda@Edge over the ALB. For an ALB you would have to create 10 rules, each mapping to the Lambda as a trigger. For CloudFront with Lambda@Edge, you just set up a distribution, point Route 53 to it, and let Lambda@Edge handle all the redirects.
upvoted 2 times
...
chico2023
1 year ago
Selected Answer: BCF
Answer: B, C and F.
upvoted 2 times
...
ggrodskiy
1 year ago
Correct BCF. Option E is incorrect because using an Amazon CloudFront distribution and a Lambda@Edge function is not suitable for this scenario. CloudFront is a content delivery network (CDN) that caches content at edge locations for faster delivery. Lambda@Edge allows you to run Lambda functions at the edge locations to customize the content delivery. However, in this case, you do not need to cache or customize any content, but simply redirect requests based on a JSON document. Using CloudFront and Lambda@Edge may add latency and cost to your solution.
upvoted 4 times
...
softarts
1 year ago
Selected Answer: BEF
The correct answer is BEF, explained in Neal's practice test 6, Q28.
upvoted 5 times
softarts
1 year ago
Lambda@Edge allows you to execute custom business logic closer to the viewer. This capability enables intelligent/programmable processing of HTTP requests at locations that are closer (for the purpose of latency) to your viewer. In this case the Lambda@Edge function can be written so that it redirects viewers based on information in the request based on domain and path.
upvoted 3 times
softarts
1 year ago
To accept multiple custom domains on the CloudFront distribution, a certificate can be created in ACM that includes multiple subject alternative names. These names can then be used in Route 53 records pointing to the distribution. The ALB will need to be configured with both an HTTP and an HTTPS listener. The HTTPS listener will also require a certificate, which could be the same certificate used in the CloudFront distribution or a separate one.
upvoted 3 times
...
...
...
NikkyDicky
1 year, 1 month ago
Selected Answer: BCF
CEF, although BCF seems workable and low operational overhead too.
upvoted 1 times
...
Parimal1983
1 year, 2 months ago
Selected Answer: BCF
An ALB supports Lambda as a target and, with an SSL certificate, supports HTTPS along with HTTP, so these options are the most logical and make sense. Option C's Lambda processes the JSON document, so option E is not applicable.
upvoted 1 times
...
Maria2023
1 year, 2 months ago
Selected Answer: CEF
Hopefully that will do the job for CloudFront origin, since that was my main concern - https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistS3AndCustomOrigins.html#concept_lambda_function_url
upvoted 2 times
...
bcx
1 year, 2 months ago
All the responses saying that D (API Gateway) has no place here because it is an API are wrong; API Gateway would be a valid way to serve the redirects and call Lambda. HOWEVER, the question says the solution MUST accept HTTP and HTTPS, and API Gateway is HTTPS-only.
upvoted 3 times
...
aca1
1 year, 3 months ago
Selected Answer: BCF
Should be B, C and F. I was in doubt between the ALB and CloudFront, but to use CloudFront you need an origin (Lambda@Edge is not the origin; it works between the user and CloudFront or between CloudFront and the origin). In this scenario you do not have an origin, so using CloudFront here is an incomplete solution.
upvoted 1 times
...
dev112233xx
1 year, 3 months ago
Selected Answer: CDF
After long investigation I vote C, D, F. API Gateway + Lambda is a perfect serverless solution to redirect URLs; the Lambda just needs to return the URL with HTTP code 301. CloudFront is mainly used for caching, so I don't like that solution. ALB: I prefer API Gateway, which is more lightweight, faster, and of course serverless.
upvoted 1 times
...
gameoflove
1 year, 3 months ago
Selected Answer: CEF
C: By creating an AWS Lambda function, the solution architect can use the JSON document to look up the target URLs for each domain and respond with the appropriate redirect URL. This way, the solution does not need to rely on a web server to handle the redirects, which reduces operational effort. E: By creating an Amazon CloudFront distribution, the solution architect can deploy a Lambda@Edge function that can look up the target URLs for each domain and respond with the appropriate redirect URL. This way, CloudFront can handle the redirection, which reduces operational effort. F: By creating an SSL certificate with ACM and including the domains as Subject Alternative Names, the solution architect can ensure that the redirect service can handle both HTTP and HTTPS requests, which is required by the company.
upvoted 3 times
...
MikelH93
1 year, 3 months ago
Selected Answer: CEF
Firstly, we need serverless services because of "LEAST amount of operational effort".
A: wrong because of overhead.
B: wrong because an ALB is not serverless.
C: right because we use Lambda to redirect.
D: no sense here, not needed.
E: CloudFront is serverless, can handle HTTP -> HTTPS, and runs a Lambda function close to the user with Lambda@Edge.
F: need a certificate for HTTPS.
upvoted 2 times
...
Sarutobi
1 year, 4 months ago
Selected Answer: BCF
I will use ALB instead of CloudFront here, but both can work.
upvoted 2 times
y0eri
1 year, 3 months ago
No: https://stackoverflow.com/a/73395412
upvoted 3 times
Sarutobi
1 year, 3 months ago
Thank you so much for pointing out this link; if you scroll down to the end, there is another link to https://medium.com/trainingdock/http-redirects-with-lambda-c20cf7934060; the link provides the TF code to deploy and test. The only change I made was to manually create a cert for the resource `aws_lb_listener.https_listener`. The only reason I would go with B instead of C here is that B states that we have both HTTP and HTTPS listeners, while E does not clarify that (it can be configured to HTTPS only, although the default is HTTP/HTTPS).
upvoted 1 times
Sarutobi
1 year, 3 months ago
In my previous post, I said, "with B instead of C here is", that was wrong I meant to say "with B instead of *E* here is".
upvoted 1 times
...
...
...
...
asifjanjua88
1 year, 4 months ago
CORRECT: "Create an Application Load Balancer that includes HTTP and HTTPS listeners" is a correct answer (as explained above.) CORRECT: "Create an Amazon CloudFront distribution and deploy a Lambda@Edge function" is also a correct answer (as explained above.) CORRECT: "Create an SSL certificate by using AWS Certificate Manager (ACM). Include the domains as Subject Alternative Names" is also a correct answer (as explained above.)
upvoted 1 times
...
frfavoreto
1 year, 4 months ago
Selected Answer: CEF
A - too much operational overhead. B - an ALB is not necessary to redirect HTTP->HTTPS; CloudFront does this. C - the Lambda function is necessary here, used with option 'E'. D - API Gateway is completely unnecessary in this scenario. E - let CloudFront handle the HTTP->HTTPS redirects altogether, with Lambda@Edge mapping domains to full URLs. F - you need a single cert with multiple Subject Alternative Names, as you have 10 different domains.
upvoted 1 times
...
OnePunchExam
1 year, 4 months ago
Selected Answer: CEF
Key objective is LEAST amount of operational effort. When the question ask this kind of questions, try to look for serverless solutions. I think those who reject E are confusing implementation effort (complexity in writing lambda func if they do not have programming background) with operational effort.
upvoted 2 times
...
mKrishna
1 year, 5 months ago
Key point "LEAST amount of operational effort" ANS: B, C, D Option A is not a serverless solution and would require more operational effort to manage an EC2 instance. Option E is also a valid solution, but deploying a CloudFront distribution would introduce additional complexity and operational overhead. Option F is not necessary for this solution since the redirection is based on domain name and not SSL certificates.
upvoted 2 times
Jay_2pt0_1
1 year, 4 months ago
We need SSL for HTTPS though.
upvoted 2 times
...
...
c73bf38
1 year, 6 months ago
Options A, D, and E are not necessary for meeting the requirements and would add additional complexity and operational effort. Option A suggests creating a dynamic webpage that runs on an EC2 instance, which is unnecessary as the redirect can be handled by the ALB and Lambda function. Option D suggests using an Amazon API Gateway API with a custom domain to publish an AWS Lambda function, which adds additional complexity and operational effort. Option E suggests creating a CloudFront distribution and deploying a Lambda@Edge function, which is more complex than the solution described above and is not necessary for the given requirements.
upvoted 1 times
...
God_Is_Love
1 year, 6 months ago
Selected Answer: BCF
My logical answer: CloudFront/edge services do not fit the requirement here, so E is not apt; it is for online marketing, and all domain users need to be redirected. The redirect service steps are all asked for: we need a load balancer as a front controller that accepts requests from all domains, and an SSL certificate is certainly needed. A is irrelevant, as creating a single web page does not help with redirection. I go with BCF as correct.
upvoted 2 times
Jay_2pt0_1
1 year, 4 months ago
I tend to agree with BCF, as well. I guess it could be CEF, though. I'm torn on this one.
upvoted 2 times
...
...
c73bf38
1 year, 6 months ago
Selected Answer: BCE
I choose B,C,E because the question is focused on implementing a redirect service. F will not work as it's for creating an SSL certificate, not creating the redirect service.
upvoted 3 times
c73bf38
1 year, 6 months ago
Step 1: Create an Application Load Balancer (ALB) that includes HTTP and HTTPS listeners. The ALB can be used to route incoming requests to the appropriate backend service, in this case the AWS Lambda function.
Step 2: Create an AWS Lambda function that uses the JSON document in combination with the event message to look up and respond with a redirect URL. We can use the ALB as a trigger for the Lambda function to process the incoming requests and return the appropriate redirect response.
Step 3: Create an Amazon CloudFront distribution. We can use the ALB as the origin for the CloudFront distribution. This allows us to use the global edge network of CloudFront for faster and more reliable content delivery. We can also deploy a Lambda@Edge function to modify the response headers and redirect the incoming requests to the appropriate target URL.
upvoted 5 times
c73bf38
1 year, 6 months ago
Switching to F: Option F is also a valid solution to create an SSL certificate using AWS Certificate Manager (ACM) that includes the domains as Subject Alternative Names, allowing secure HTTPS connections.
upvoted 1 times
...
...
...
zozza2023
1 year, 6 months ago
Selected Answer: CEF
CEF are the answers
upvoted 3 times
...
Untamables
1 year, 8 months ago
Selected Answer: CEF
CEF The serverless architecture reduces operational overhead the most. https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-generating-http-responses-in-requests.html https://docs.aws.amazon.com/acm/latest/userguide/acm-services.html
upvoted 2 times
...
masetromain
1 year, 8 months ago
Selected Answer: CEF
https://www.examtopics.com/discussions/amazon/view/69017-exam-aws-certified-solutions-architect-professional-topic-1/
upvoted 3 times
...
Question #29 Topic 1

A company that has multiple AWS accounts is using AWS Organizations. The company’s AWS accounts host VPCs, Amazon EC2 instances, and containers.
The company’s compliance team has deployed a security tool in each VPC where the company has deployments. The security tools run on EC2 instances and send information to the AWS account that is dedicated to the compliance team. The company has tagged all the compliance-related resources with a key of “costCenter” and a value of “compliance”.
The company wants to identify the cost of the security tools that are running on the EC2 instances so that the company can charge the compliance team’s AWS account. The cost calculation must be as accurate as possible.
What should a solutions architect do to meet these requirements?

  • A. In the management account of the organization, activate the costCenter user-defined tag. Configure monthly AWS Cost and Usage Reports to save to an Amazon S3 bucket in the management account. Use the tag breakdown in the report to obtain the total cost for the costCenter tagged resources.
  • B. In the member accounts of the organization, activate the costCenter user-defined tag. Configure monthly AWS Cost and Usage Reports to save to an Amazon S3 bucket in the management account. Schedule a monthly AWS Lambda function to retrieve the reports and calculate the total cost for the costCenter tagged resources.
  • C. In the member accounts of the organization activate the costCenter user-defined tag. From the management account, schedule a monthly AWS Cost and Usage Report. Use the tag breakdown in the report to calculate the total cost for the costCenter tagged resources.
  • D. Create a custom report in the organization view in AWS Trusted Advisor. Configure the report to generate a monthly billing summary for the costCenter tagged resources in the compliance team’s AWS account.
Reveal Solution Hide Solution

Correct Answer: A 🗳️

Community vote distribution
A (95%)
5%

masetromain
Highly Voted 1 year, 7 months ago
Selected Answer: A
Answer A: because we do not depend on the member accounts, I prefer the management account. Option C or A would be the correct answer. In option C, the solutions architect would activate the costCenter user-defined tag in the member accounts of the organization and then schedule a monthly AWS Cost and Usage Report from the management account to retrieve the reports and calculate the total cost for the costCenter-tagged resources. In option A, the management account of the organization would activate the costCenter user-defined tag and configure monthly AWS Cost and Usage Reports to be saved to an Amazon S3 bucket in the management account, then use the tag breakdown in the report to obtain the total cost for the costCenter-tagged resources. Both options would allow the company to accurately identify the cost of the security tools running on the EC2 instances and charge the compliance team’s AWS account.
upvoted 18 times
dkx
1 year, 2 months ago
Only a management account in an organization and single accounts that aren't members of an organization have access to the cost allocation tags manager in the Billing and Cost Management console. https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/custom-tags.html
upvoted 12 times
...
chathur
1 year, 2 months ago
User-defined tags cannot be allowed from management accounts in AWS Organizations. It must be done from the management account.
upvoted 2 times
Reval
1 month, 3 weeks ago
Did you mean from member account? in this sentence "User-defined tags can not be allowed from management accounts in AWS Organization."
upvoted 1 times
...
...
...
Untamables
Highly Voted 1 year, 8 months ago
Selected Answer: A
I vote A. https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/custom-tags.html https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/configurecostallocreport.html
upvoted 6 times
...
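For anyone who wants to see option A's two steps programmatically, here is a minimal boto3 sketch, assuming management-account credentials. The dates are placeholders, and the tag key and value mirror the question. The question's answer reads the tag breakdown from the Cost and Usage Report; the Cost Explorer call shown here is just the quickest way to see the same tag-filtered number.

import boto3

# Cost Explorer's endpoint lives in us-east-1. This must run with
# credentials for the organization's management account; member
# accounts cannot manage cost allocation tags.
ce = boto3.client("ce", region_name="us-east-1")

# Step 1: activate the user-defined cost allocation tag.
ce.update_cost_allocation_tags_status(
    CostAllocationTagsStatus=[{"TagKey": "costCenter", "Status": "Active"}]
)

# Step 2: once the tag is active, costs can be filtered by it.
resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-07-01", "End": "2024-08-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    Filter={"Tags": {"Key": "costCenter", "Values": ["compliance"]}},
)
print(resp["ResultsByTime"][0]["Total"]["UnblendedCost"]["Amount"])

Note that the tag only applies to usage recorded after activation, which is another reason the activation step has to happen in the management account first.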
Jason666888
Most Recent 3 weeks, 2 days ago
Selected Answer: A
The ideal way to get this job done would be AWS Cost Explorer, but among the given options we should go with option A, as the user-defined tag can only be managed in the management account.
upvoted 1 times
...
gofavad926
5 months, 1 week ago
Selected Answer: A
A is correct
upvoted 1 times
...
subbupro
8 months, 3 weeks ago
A is correct; we need to log in to the management account to activate the tag.
upvoted 1 times
...
severlight
9 months, 2 weeks ago
Selected Answer: A
Yes, you need to activate cost allocation tags before using them, and you can do this in the same place where you view your reports: the management account.
upvoted 2 times
...
whenthan
10 months, 1 week ago
Selected Answer: C
C lines up correctly: activate the tag in the member accounts, generate the AWS CUR from the management account (which can see costs across all member accounts), and use the tag breakdown in the report.
upvoted 1 times
...
imvb88
11 months ago
Selected Answer: A
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/activating-tags.html "For tags to appear on your billing reports, you must activate them." https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/custom-tags.html "Only a management account in an organization and single accounts that aren't members of an organization have access to the cost allocation tags manager in the Billing and Cost Management console." -> eliminate B,C. D is not relevant
upvoted 2 times
...
whenthan
11 months, 3 weeks ago
Selected Answer: A
https://docs.aws.amazon.com/whitepapers/latest/tagging-best-practices/building-a-cost-allocation-strategy.html
upvoted 1 times
...
bur4an
12 months ago
Selected Answer: A
Only a management account in an organization and single accounts that aren't members of an organization have access to the cost allocation tags manager in the Billing and Cost Management console.
upvoted 3 times
...
NikkyDicky
1 year, 1 month ago
it's an A
upvoted 1 times
...
rtguru
1 year, 3 months ago
I go with D
upvoted 1 times
...
mfsec
1 year, 5 months ago
Selected Answer: A
Cost center tag in the management account.
upvoted 1 times
...
kiran15789
1 year, 5 months ago
Selected Answer: A
Management account for reports
upvoted 1 times
...
zozza2023
1 year, 6 months ago
Selected Answer: A
Answer A
upvoted 2 times
...
yimicc
1 year, 8 months ago
Selected Answer: C
Should be a C
upvoted 1 times
yimicc
1 year, 8 months ago
Changed to A: the activation of user-defined tags for billing can only be done by the management account.
upvoted 5 times
...
...
tman22
1 year, 8 months ago
A. You want the cost information across all accounts - So you use the management account.
upvoted 4 times
...
masetromain
1 year, 8 months ago
I want to answer C
upvoted 1 times
...
Question #30 Topic 1

A company has 50 AWS accounts that are members of an organization in AWS Organizations. Each account contains multiple VPCs. The company wants to use AWS Transit Gateway to establish connectivity between the VPCs in each member account. Each time a new member account is created, the company wants to automate the process of creating a new VPC and a transit gateway attachment.
Which combination of steps will meet these requirements? (Choose two.)

  • A. From the management account, share the transit gateway with member accounts by using AWS Resource Access Manager.
  • B. From the management account, share the transit gateway with member accounts by using an AWS Organizations SCP.
  • C. Launch an AWS CloudFormation stack set from the management account that automatically creates a new VPC and a VPC transit gateway attachment in a member account. Associate the attachment with the transit gateway in the management account by using the transit gateway ID.
  • D. Launch an AWS CloudFormation stack set from the management account that automatically creates a new VPC and a peering transit gateway attachment in a member account. Share the attachment with the transit gateway in the management account by using a transit gateway service-linked role.
  • E. From the management account, share the transit gateway with member accounts by using AWS Service Catalog.
Reveal Solution Hide Solution

Correct Answer: AC 🗳️

Community vote distribution
AC (100%)

masetromain
Highly Voted 1 year, 7 months ago
Selected Answer: AC
Option A is sharing the transit gateway with member accounts by using AWS Resource Access Manager, which allows the management account to share resources with member accounts. Option C is launching an AWS CloudFormation stack set from the management account that automatically creates a new VPC and a VPC transit gateway attachment in a member account, and associates the attachment with the transit gateway in the management account by using the transit gateway ID. This automation of creating a new VPC and transit gateway attachment in new member accounts can help to streamline the process and reduce operational effort.
upvoted 21 times
jainparag1
9 months ago
Precisely!
upvoted 1 times
...
...
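As a rough sketch of what step A looks like in code (the account ID, transit gateway ARN, and organization ARN below are placeholders, and option C's CloudFormation stack set is a separate piece):

import boto3

ram = boto3.client("ram")

# Option A: share the transit gateway with the whole organization.
share = ram.create_resource_share(
    name="tgw-org-share",
    resourceArns=[
        "arn:aws:ec2:us-east-1:111111111111:transit-gateway/tgw-0abc1234567890def"
    ],
    # Sharing with the organization ARN covers every current and future
    # member account, which fits the "new member account" automation.
    principals=["arn:aws:organizations::111111111111:organization/o-exampleorgid"],
    allowExternalPrincipals=False,
)
print(share["resourceShare"]["resourceShareArn"])

With the share in place, option C's stack set can create the VPC and an AWS::EC2::TransitGatewayAttachment in each new member account, referencing the shared transit gateway's ID.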
gofavad926
Most Recent 5 months, 1 week ago
Selected Answer: AC
AC are correct
upvoted 1 times
...
[Removed]
8 months, 3 weeks ago
Selected Answer: AC
I am working on a project doing the exact same thing :D
upvoted 2 times
...
rlf
10 months, 3 weeks ago
AC. https://aws.amazon.com/ko/blogs/networking-and-content-delivery/automating-aws-transit-gateway-attachments-to-a-transit-gateway-in-a-central-account/ https://cloudjourney.medium.com/aws-ram-and-transit-gateway-8ac230f298e8
upvoted 1 times
...
Simon523
11 months, 3 weeks ago
Selected Answer: AC
You can use AWS Resource Access Manager (RAM) to share a transit gateway for VPC attachments across accounts or across your organization in AWS Organizations.
upvoted 1 times
...
NikkyDicky
1 year, 1 month ago
AC of course
upvoted 1 times
...
mfsec
1 year, 5 months ago
Selected Answer: AC
AC are my choice.
upvoted 2 times
...
zozza2023
1 year, 6 months ago
Selected Answer: AC
A and C are the answer for me
upvoted 2 times
...
Untamables
1 year, 8 months ago
Selected Answer: AC
A & C https://docs.aws.amazon.com/vpc/latest/tgw/tgw-transit-gateways.html https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-transitgatewayattachment.html
upvoted 2 times
...
masetromain
1 year, 8 months ago
Selected Answer: AC
https://www.examtopics.com/discussions/amazon/view/60090-exam-aws-certified-solutions-architect-professional-topic-1/
upvoted 3 times
...
Question #31 Topic 1

An enterprise company wants to allow its developers to purchase third-party software through AWS Marketplace. The company uses an AWS Organizations account structure with full features enabled, and has a shared services account in each organizational unit (OU) that will be used by procurement managers. The procurement team’s policy indicates that developers should be able to obtain third-party software from an approved list only and use Private Marketplace in AWS Marketplace to achieve this requirement. The procurement team wants administration of Private Marketplace to be restricted to a role named procurement-manager-role, which could be assumed by procurement managers. Other IAM users, groups, roles, and account administrators in the company should be denied Private Marketplace administrative access.
What is the MOST efficient way to design an architecture to meet these requirements?

  • A. Create an IAM role named procurement-manager-role in all AWS accounts in the organization. Add the PowerUserAccess managed policy to the role. Apply an inline policy to all IAM users and roles in every AWS account to deny permissions on the AWSPrivateMarketplaceAdminFullAccess managed policy.
  • B. Create an IAM role named procurement-manager-role in all AWS accounts in the organization. Add the AdministratorAccess managed policy to the role. Define a permissions boundary with the AWSPrivateMarketplaceAdminFullAccess managed policy and attach it to all the developer roles.
  • C. Create an IAM role named procurement-manager-role in all the shared services accounts in the organization. Add the AWSPrivateMarketplaceAdminFullAccess managed policy to the role. Create an organization root-level SCP to deny permissions to administer Private Marketplace to everyone except the role named procurement-manager-role. Create another organization root-level SCP to deny permissions to create an IAM role named procurement-manager-role to everyone in the organization.
  • D. Create an IAM role named procurement-manager-role in all AWS accounts that will be used by developers. Add the AWSPrivateMarketplaceAdminFullAccess managed policy to the role. Create an SCP in Organizations to deny permissions to administer Private Marketplace to everyone except the role named procurement-manager-role. Apply the SCP to all the shared services accounts in the organization.
Reveal Solution Hide Solution

Correct Answer: D 🗳️

Community vote distribution
C (92%)
8%

masetromain
Highly Voted 1 year, 7 months ago
Selected Answer: C
The most efficient way to design an architecture to meet these requirements is option C. By creating an IAM role named procurement-manager-role in all the shared services accounts in the organization and adding the AWSPrivateMarketplaceAdminFullAccess managed policy to the role, the procurement managers will have the necessary permissions to administer Private Marketplace. Then, by creating an organization root-level SCP to deny permissions to administer Private Marketplace to everyone except the role named procurement-manager-role and another organization root-level SCP to deny permissions to create an IAM role named procurement-manager-role to everyone in the organization, the company can restrict access to Private Marketplace administrative access to only the procurement managers.
upvoted 14 times
SK_Tyagi
1 year ago
The catch is the "Create an organization root-level SCP to deny permissions". I'd refrain from creating a root-level SCP
upvoted 3 times
...
...
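To ground option C, here is a hedged boto3 sketch of the first root-level SCP. The two marketplace actions shown are only a sample of what the AWSPrivateMarketplaceAdminFullAccess managed policy covers, and the policy name and description are placeholders; the second SCP (blocking creation of a look-alike role) would follow the same create/attach pattern.

import json

import boto3

org = boto3.client("organizations")

# Deny Private Marketplace administration to every principal except the
# procurement-manager-role. The action list is illustrative, not complete.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyPrivateMarketplaceAdmin",
            "Effect": "Deny",
            "Action": [
                "aws-marketplace:AssociateProductsWithPrivateMarketplace",
                "aws-marketplace:DisassociateProductsFromPrivateMarketplace",
            ],
            "Resource": "*",
            "Condition": {
                "StringNotLike": {
                    "aws:PrincipalARN": "arn:aws:iam::*:role/procurement-manager-role"
                }
            },
        }
    ],
}

policy = org.create_policy(
    Name="DenyPrivateMarketplaceAdmin",
    Description="Only procurement-manager-role may administer Private Marketplace",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attach at the organization root so the guardrail applies everywhere.
root_id = org.list_roots()["Roots"][0]["Id"]
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId=root_id
)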
MAZIADI
Most Recent 2 weeks, 1 day ago
Selected Answer: C
Not D. Why? Placing the procurement-manager-role in developer accounts with full Private Marketplace admin access increases the risk of mismanagement. Additionally, applying the SCP only to the shared services accounts does not adequately restrict access across the entire organization.
upvoted 1 times
...
cnethers
2 months, 1 week ago
Why C is right and D is wrong.... Focus on the end of the question : Other IAM users, groups, roles, and account administrators in the company should be denied Private Marketplace administrative access. What is the MOST efficient way to design an architecture to meet these requirements? Who should be excluded? Other IAM users, groups, roles, and account administrators in the company What is the MOST efficient way? Apply SCP at the root level D is more work than C, this is a good reason to choose C over D
upvoted 1 times
...
Chakanetsa
3 months, 3 weeks ago
Selected Answer: C
C. Most efficient and secure: Creating the procurement-manager-role in shared services accounts limits its scope to specific OUs, aligning with the organizational structure. Granting AWSPrivateMarketplaceAdminFullAccess to this role provides the necessary permissions for managing Private Marketplace within the OU. An organization root-level SCP denying Private Marketplace administration to everyone except the procurement-manager-role ensures centralized control and restricts unauthorized access. Another SCP preventing the creation of the procurement-manager-role outside of shared services accounts adds an extra layer of security.
upvoted 1 times
...
anubha.agrahari
5 months, 3 weeks ago
Selected Answer: C
C. D doesn't make sense.
upvoted 1 times
...
ninomfr64
8 months, 1 week ago
Selected Answer: C
Not A, as it does not implement the requirement that procurement managers use the shared services account in each organizational unit. Not B, as this would allow developers to administer Private Marketplace. Not D, as this would also allow developers to administer Private Marketplace. C is correct, as it configures the required role (with the required permissions) only in the shared services accounts, uses an SCP to deny Private Marketplace management to everyone except the role named procurement-manager-role, and uses another SCP to prevent creating a role named procurement-manager-role.
upvoted 2 times
ninomfr64
8 months, 1 week ago
Actually D would do the job, but creating a role in every account is not strictly necessary and would cause more work.
upvoted 1 times
...
...
subbupro
8 months, 3 weeks ago
C is better than D, because applying a deny SCP at the root level is the best practice. Creating the role and applying it to each account is not the correct way and is overhead for the administrator.
upvoted 2 times
...
severlight
9 months, 2 weeks ago
Selected Answer: C
Look at whenthan's answer.
upvoted 1 times
...
whenthan
10 months, 1 week ago
Selected Answer: C
Creation of the role in all shared services accounts; adding the required policy to the role; creation of an org root-level SCP to guardrail who can have those privileges; creation of another SCP to close off the workaround of creating another role with the same access.
upvoted 3 times
...
Tarun4b7
11 months ago
Selected Answer: D
Options C and D are the most relevant. Once you create a role, you cannot create another role with the same name, so option C doesn't make sense. So my answer is option D.
upvoted 2 times
_Jassybanga_
6 months, 2 weeks ago
i am on same page
upvoted 1 times
_Jassybanga_
6 months, 2 weeks ago
It's C - the role should be in the shared services accounts, not in all accounts.
upvoted 1 times
...
...
...
qxy
11 months, 3 weeks ago
Selected Answer: C
Clearly, it's C.
upvoted 1 times
...
Karamen
1 year ago
Selected answer: C. Option D says "Create an IAM role named procurement-manager-role in all AWS accounts that will be used by developers", but the procurement-manager-role is used by managers, not by developers.
upvoted 2 times
alicewsm
10 months, 1 week ago
The first sentence says "An enterprise company wants to allow its developers to purchase third-party software through AWS Marketplace."
upvoted 1 times
jainparag1
9 months ago
Developers have to ask the procurement manager and cannot purchase by themselves.
upvoted 2 times
...
...
...
SorenBendixen
1 year ago
Selected Answer: D
Its D - According to this : https://aws.amazon.com/blogs/awsmarketplace/controlling-access-to-a-well-architected-private-marketplace-using-iam-and-aws-organizations/
upvoted 2 times
SorenBendixen
1 year ago
It's C. D is wrong - I missed: "procurement-manager-role in all AWS accounts that will be used by DEVELOPERS".
upvoted 2 times
...
...
NikkyDicky
1 year, 1 month ago
Selected Answer: C
Its a C
upvoted 1 times
...
gd1
1 year, 2 months ago
Selected Answer: C
C is correct-
upvoted 1 times
...
Maria2023
1 year, 2 months ago
Selected Answer: C
D is a distractor since the developers do not need to administer the private marketplace. Plus that the procurement team acts only in the shared accounts. That leaves C as the only option
upvoted 4 times
...
Jackhemo
1 year, 2 months ago
Selected Answer: C
From olabiba.ai: The MOST efficient way to design an architecture to meet these requirements is option C. Explanation: - Create an IAM role named procurement-manager-role in all the shared services accounts in the organization. - Add the AWSPrivateMarketplaceAdminFullAccess managed policy to the role. - Create an organization root-level SCP to deny permissions to administer Private Marketplace to everyone except the role named procurement-manager-role. - Create another organization root-level SCP to deny permissions to create an IAM role named procurement-manager-role to everyone in the organization. This approach ensures that only the procurement managers, who assume the procurement-manager-role, have administrative access to Private Marketplace. Other IAM users, groups, roles, and account administrators in the company are denied access to Private Marketplace administrative functions.
upvoted 3 times
...
rtguru
1 year, 3 months ago
Correct answer is D
upvoted 1 times
chikorita
1 year, 2 months ago
An answer without proper justification won't add up. Additionally, the 4th option does not mention the "root" level, which is the most efficient way of solving the problem, so the correct answer is C.
upvoted 2 times
...
...
Sarutobi
1 year, 4 months ago
Selected Answer: C
Very similar to this blog https://aws.amazon.com/blogs/awsmarketplace/controlling-access-to-a-well-architected-private-marketplace-using-iam-and-aws-organizations/. In here there are more details.
upvoted 3 times
...
mfsec
1 year, 5 months ago
Selected Answer: C
Create an IAM role named procurement-manager-role in all the shared services accounts in the organization.
upvoted 1 times
...
cudbyanc
1 year, 6 months ago
Selected Answer: C
Confirmed
upvoted 1 times
...
zozza2023
1 year, 6 months ago
Selected Answer: C
should be C i guess
upvoted 1 times
...
ask4cloud
1 year, 7 months ago
Selected Answer: C
This approach allows the procurement managers to assume the procurement-manager-role in shared services accounts, which have the AWSPrivateMarketplaceAdminFullAccess managed policy attached to it and can then manage the Private Marketplace. The organization root-level SCP denies the permission to administer Private Marketplace to everyone except the role named procurement-manager-role and another SCP denies the permission to create an IAM role named procurement-manager-role to everyone in the organization, ensuring that only the procurement team can assume the role and manage the Private Marketplace. This approach provides a centralized way to manage and restrict access to Private Marketplace while maintaining a high level of security.
upvoted 3 times
...
masetromain
1 year, 8 months ago
Selected Answer: C
https://www.examtopics.com/discussions/amazon/view/28410-exam-aws-certified-solutions-architect-professional-topic-1/
upvoted 3 times
...
Question #32 Topic 1

A company is in the process of implementing AWS Organizations to constrain its developers to use only Amazon EC2, Amazon S3, and Amazon DynamoDB. The developers account resides in a dedicated organizational unit (OU). The solutions architect has implemented the following SCP on the developers account:

When this policy is deployed, IAM users in the developers account are still able to use AWS services that are not listed in the policy.
What should the solutions architect do to eliminate the developers’ ability to use services outside the scope of this policy?

  • A. Create an explicit deny statement for each AWS service that should be constrained.
  • B. Remove the FullAWSAccess SCP from the developers account’s OU.
  • C. Modify the FullAWSAccess SCP to explicitly deny all services.
  • D. Add an explicit deny statement using a wildcard to the end of the SCP.
Reveal Solution Hide Solution

Correct Answer: A 🗳️

Community vote distribution
B (66%)
D (30%)
3%

zhangyu20000
Highly Voted 1 year, 8 months ago
B is correct because default FullAWSAccess SCP is applied
upvoted 17 times
...
Six_Fingered_Jose
Highly Voted 11 months, 2 weeks ago
Selected Answer: B
If you go to AWS management console and look up how SCP works, you will find that by default FullAWSAccess policy is attached to all OUs by default if you have SCP enabled.
upvoted 9 times
jainparag1
9 months ago
That's correct. You can detach the FullAWSAccess SCP from member accounts as long as you replace it with another policy that has the specific permissions required.
upvoted 2 times
...
...
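For the curious, option B is essentially a one-call change. A minimal boto3 sketch, assuming a placeholder OU ID and that the allow-list SCP from the question is already attached (FullAWSAccess has the fixed policy ID p-FullAWSAccess):

import boto3

org = boto3.client("organizations")
DEV_OU_ID = "ou-root-devexample"  # placeholder developers OU ID

# Show which SCPs are currently attached to the developers OU.
attached = org.list_policies_for_target(
    TargetId=DEV_OU_ID, Filter="SERVICE_CONTROL_POLICY"
)["Policies"]
for p in attached:
    print(p["Id"], p["Name"])

# Detach the default allow-all SCP. With only the allow-list SCP left,
# every service not explicitly allowed there becomes implicitly denied.
# (Every target must keep at least one SCP attached, which the
# question's allow-list SCP satisfies.)
org.detach_policy(PolicyId="p-FullAWSAccess", TargetId=DEV_OU_ID)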
MAZIADI
Most Recent 2 weeks, 1 day ago
Selected Answer: B
B. Remove the FullAWSAccess SCP from the developers account’s OU. Explanation: FullAWSAccess SCP: By default, AWS Organizations attaches a FullAWSAccess SCP to all OUs and accounts, allowing access to all AWS services unless restricted by another SCP. If this SCP is still attached to the developers' OU, it will allow access to all services, regardless of the more restrictive SCP you have applied. SCP Behavior: SCPs are evaluated in an "implicit deny" model. If an action is not explicitly allowed by the SCPs, it is implicitly denied. However, if multiple SCPs are attached and one allows an action (like FullAWSAccess), that action is permitted unless explicitly denied in another SCP.
upvoted 1 times
...
felon124
2 weeks, 5 days ago
Selected Answer: B
AWS Organizations attaches an AWS managed SCP named FullAWSAccess to every root, OU and account when it's created. This policy allows all services and actions. You can replace FullAWSAccess with a policy allowing only a set of services so that new AWS services are not allowed unless they are explicitly allowed by updating SCPs. https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_evaluation.html
upvoted 1 times
...
8693a49
3 weeks, 6 days ago
Selected Answer: D
Best practice would be to create an explicit deny statement. The reason is that other SCPs could be in effect, aside from AWSFullAccess, that could grant access to other services. If the goal is to deny access to any other service, then this must be made explicit.
upvoted 1 times
...
vip2
3 weeks, 6 days ago
Selected Answer: B
B is correct. Remove FullAWSAccess from the developer account's OU --> all services are implicitly denied --> the explicit 'allow' SCP then restricts access to only the allowed services.
upvoted 1 times
...
Moghite
1 month ago
Selected Answer: D
{ "Sid": "ExplicitDeny", "Effect": "Deny", "NotAction": [ "ec2:*", "dynamodb:*", "s3:*" ], "Resource": "*" }
upvoted 2 times
...
Helpnosense
2 months, 1 week ago
Selected Answer: D
The FullAWSAccess SCP is inherited from the root and can't be removed from the OU. D is the correct answer.
upvoted 2 times
sam2ng
1 week, 6 days ago
It can be. Read "How SCPs work with Allow" here; it shows an example: https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_evaluation.html
upvoted 1 times
...
...
qaz12wsx
4 months, 1 week ago
Selected Answer: D
{ "Version": "2012-10-17", "Statement": [ { "Sid": "AllowEC2", "Effect": "Allow", "Action": "ec2:*", "Resource": "*" }, { "Sid": "AllowDynamoDB", "Effect": "Allow", "Action": "dynamodb:*", "Resource": "*" }, { "Sid": "AllowS3", "Effect": "Allow", "Action": "s3:*", "Resource": "*" }, { "Sid": "ExplicitDeny", "Effect": "Deny", "NotAction": [ "ec2:*", "dynamodb:*", "s3:*" ], "Resource": "*" } ] }
upvoted 4 times
...
Dgix
5 months, 2 weeks ago
Selected Answer: D
D - the alternative doesn't mention an ASG which must be taken as implied. The other solutions are simply absurd: A: The operational overhead is ENORMOUS. To those who think that "operational overhead" is only day-to-day maintenance: it is not. It encompasses ALL CHANGES to the infrastructure. B: Kubernetes is the very definition of operational overhead. Always avoid unless there is an absolutely compelling reason to use it. C: And what do you people think the function of the Lambda is? None. D: This works and is the most straightforward as soon as you realise that the ASG is implied. In the final analysis, this is another example of how AWS exam questions leave out information in order to trip you up.
upvoted 2 times
...
Dafukubai
6 months, 1 week ago
Selected Answer: D
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_evaluation.html FullAWSAccess is NOT inherited. It must be set at every OU layer. B is the most inadvisable choice because the target account will be denied all AWS services, including EC2, if FullAWSAccess is deleted at its OU.
upvoted 2 times
...
8608f25
6 months, 2 weeks ago
Selected Answer: D
To eliminate the developers’ ability to use AWS services outside the scope of Amazon EC2, Amazon S3, and Amazon DynamoDB, the solutions architect should: * D. Add an explicit deny statement using a wildcard to the end of the SCP. This action effectively restricts access to only the specified services by explicitly denying access to all other AWS services. The corrected Service Control Policy (SCP) statement would look something like this:
{
  "Sid": "ExplicitDenyAllOtherServices",
  "Effect": "Deny",
  "NotAction": [
    "ec2:*",
    "dynamodb:*",
    "s3:*"
  ],
  "Resource": "*"
}
upvoted 4 times
8608f25
6 months, 2 weeks ago
Full SCP:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowEC2",
      "Effect": "Allow",
      "Action": "ec2:*",
      "Resource": "*"
    },
    {
      "Sid": "AllowDynamoDB",
      "Effect": "Allow",
      "Action": "dynamodb:*",
      "Resource": "*"
    },
    {
      "Sid": "AllowS3",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    },
    {
      "Sid": "ExplicitDenyAllOtherServices",
      "Effect": "Deny",
      "NotAction": [
        "ec2:*",
        "dynamodb:*",
        "s3:*"
      ],
      "Resource": "*"
    }
  ]
}
upvoted 2 times
8608f25
6 months, 2 weeks ago
Explanation: * Option A is less efficient because creating an explicit deny statement for each AWS service except EC2, S3, and DynamoDB would be impractical given the large number of services AWS offers. * Option B suggests removing the FullAWSAccess SCP from the developers account’s OU. While removing FullAWSAccess could potentially restrict access, it’s not as direct or effective as implementing an explicit deny. The FullAWSAccess SCP allows all actions on all resources within the account or OU it’s applied to, and simply removing it doesn’t automatically restrict access to only the specified services. * Option C suggests modifying the FullAWSAccess SCP to explicitly deny all services. However, the FullAWSAccess SCP is a default SCP applied by AWS Organizations and should generally be left as is. Custom SCPs should be created to enforce specific policies. * Option D is the most direct and effective approach.
upvoted 3 times
...
...
...
LazyAutonomy
6 months, 3 weeks ago
Selected Answer: B
ignore my previous comment
upvoted 2 times
...
LazyAutonomy
6 months, 3 weeks ago
Selected Answer: A
By default, FullAWSAccess is applied at the root, so all member accounts in all OUs will inherit this policy. Removing FullAWSAccess SCP from a specific OU isn't enough. Answer is A.
upvoted 1 times
LazyAutonomy
6 months, 3 weeks ago
Ahh, thanks to @gustori99 for pointing out my incorrect understanding. SCPs are not inherited. See https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_evaluation.html
upvoted 1 times
LazyAutonomy
6 months, 3 weeks ago
The answer is B.
upvoted 1 times
...
...
...
Vaibs099
6 months, 4 weeks ago
A is correct - removing the FullAWSAccess SCP from the developer account alone is not going to help, as the FullAWSAccess allow-all is also inherited from the root and parent OUs. When SCPs are enabled, FullAWSAccess is attached by default. One option is replacing FullAWSAccess on the root, all parent OUs, and the developer account with the SCP mentioned in the question that allows only the three services. If we only remove the FullAWSAccess SCP from the developer's account, then we will have to explicitly deny all the other services that are not required.
upvoted 1 times
...
gustori99
6 months, 4 weeks ago
Selected Answer: A
It seems that almost no one understands how SCPs are evaluated. From the documentation: For a permission to be allowed for a specific account, there must be an explicit Allow statement at every level from the root through each OU in the direct path to the account (including the target account itself). https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_evaluation.html So FullAWSAccess at the root level is NOT inherited; it must be present at ALL levels. B is wrong because when you remove FullAWSAccess at the OU level and do not replace it with an allow list of the permitted services, ALL services will be denied even if you have an allow list at the account level. C and D don't make sense.
upvoted 1 times
LazyAutonomy
6 months, 3 weeks ago
The question states clearly that 3 services are permitted by the new SCP attached to the developer OU. The answer is B.
upvoted 1 times
gustori99
6 months ago
The question states "the solutions architect has implemented the following SCP on the developers account". In my understanding the SCP is attached on the developers account not on the OU level. If SCP is attached on OU level then B is correct. If it is attached on the account B cannot be correct.
upvoted 1 times
...
...
...
TheHowesHold
7 months ago
Selected Answer: B
B -AWS Organizations attaches an AWS managed SCP policy named FullAWSAccess which allows all services and actions. If this policy is removed and not replaced at any level of the organization, all OUs and accounts under that level would be blocked from taking any actions.
upvoted 2 times
...
ele
7 months, 4 weeks ago
Selected Answer: D
The right answer is D. D: an explicit deny will override any allow inherited from the root. AWS doc: https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_evaluation.html#how_scps_deny Not A, as it is not efficient. Not B, as it will not help if the root still has FullAWSAccess. Not C, as it is not possible to modify it.
upvoted 2 times
...
ninomfr64
8 months, 1 week ago
Selected Answer: B
Services are implicitly denied, and you allow services with an SCP (or explicitly deny them). In this scenario an SCP applied at a higher level is allowing more services, thus B.
upvoted 2 times
...
subbupro
8 months, 3 weeks ago
The best approach is to apply the deny at the root level - it is a must-do best practice. When you create the organization, first create an explicit default deny statement for each AWS service that should be constrained.
upvoted 1 times
...
eurriola10
9 months ago
Selected Answer: B
B is correct. Review this link under Sandbox OU Scenario 2 https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_evaluation.html#strategy_using_scps
upvoted 3 times
...
edder
9 months ago
Selected Answer: B
The answer is B. When I actually tried it, except for A, the behavior was as follows. B: Services outside the scope of the policy cannot be used. C: All services are unavailable. D: All services are unavailable.
upvoted 5 times
...
severlight
9 months, 2 weeks ago
Selected Answer: B
B, they are able to access, hence all current SCPs including parent ones have explicit allow. Removing explicit allow from the current OU will be enough to deny access.
upvoted 2 times
...
AMohanty
11 months, 2 weeks ago
An SCP is a DENY statement; it's NOT designed to PERMIT/ALLOW service access.
upvoted 1 times
...
chico2023
1 year ago
Selected Answer: A
Answer: A
upvoted 1 times
chico2023
1 year ago
As bad as it sounds, I still think it's the least wrong answer, and I can explain my understanding below:
upvoted 2 times
...
chico2023
1 year ago
By reading the answer "Remove the FullAWSAccess SCP from the developers account’s OU", it's clear that you are removing the FullAWSAccess SCP from the developers OU, not from the root OU. So if the company has a FullAWSAccess SCP (the AWS managed policy) on the root OU, removing the same one from the developers OU won't change a thing.
C doesn't make much sense the way it was put: if it is a managed policy, you can't change it, and if it's not, why modify it with a deny? It would be much better to just detach it and attach a more restrictive one.
I wouldn't choose D as the answer because if you have both a deny and an allow statement in an SCP, the deny statement takes precedence over the allow statement.
In summary, as we don't know whether they have a FullAWSAccess SCP on their root, or are using an allow list, the only way I can think of (at least for now) to be sure that developers won't be able to use services outside the scope of the aforementioned policy is by denying the rest, as described in A.
upvoted 4 times
jpa8300
7 months, 4 weeks ago
This explanation makes much more sense than the others, so I would go with A too.
upvoted 1 times
...
...
...
Christina666
1 year, 1 month ago
Selected Answer: B
If you reenable SCPs on the organization root, all entities are reset to being attached to only the default FullAWSAccess SCP.
upvoted 2 times
...
SmileyCloud
1 year, 1 month ago
Selected Answer: D
It's D actually. If you remove the FullAWSAccess you are still inheriting the same policy from the root account. See this: https://imgur.com/a/2EMUm0S This means you have to remove the same SCP from root. On top of that, AWS has the same use case here -> https://aws.amazon.com/blogs/industries/best-practices-for-aws-organizations-service-control-policies-in-a-multi-account-environment/
upvoted 4 times
Arnaud92
1 year, 1 month ago
Is it a recommended practice to have a FullAWSAccess + a Deny in another SCP?
upvoted 1 times
...
...
NikkyDicky
1 year, 1 month ago
Selected Answer: B
Replace default allow SCP
upvoted 2 times
...
Parimal1983
1 year, 2 months ago
Selected Answer: B
Instead of creating an explicit deny for each and every service, it is more efficient to remove the root-level allow-all SCP and add an explicit SCP allowing EC2, S3, and DynamoDB to the developer OU.
upvoted 4 times
...
ailves
1 year, 2 months ago
Selected Answer: B
According to https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_strategies.html we have to replace (not remove) SCP. "To use SCPs as an allow list, you must replace the AWS managed FullAWSAccess SCP with an SCP that explicitly permits only those services and actions that you want to allow".
upvoted 4 times
...
gameoflove
1 year, 3 months ago
Selected Answer: B
FullAWSAccess must be removed.
upvoted 2 times
...
Maria2023
1 year, 4 months ago
Selected Answer: B
Initially I voted for A but then I saw the following statement : "AWS services that aren't explicitly allowed by the SCPs associated with an AWS account or its parent OUs are denied access to the AWS accounts or OUs associated with the SCP. SCPs associated to an OU are inherited by all AWS accounts in that OU"
upvoted 2 times
...
Sarutobi
1 year, 4 months ago
Selected Answer: D
B says: "Remove the FullAWSAccess SCP from the developers account’s OU", with the information we have here there is no way to guarantee the SCP is applied to the developers account's OU. It can be any place from the root all the way down to the developer's OU.
upvoted 3 times
...
frfavoreto
1 year, 4 months ago
Selected Answer: B
'B' is the BEST answer, but not the only correct one. 'D' is also technically correct, because adding a wildcard DENY statement would override the FullAWSAccess SCP attached by default to the OU and it would have the same final result. However 'B' is more appropriate here, the so called best practice. This is what 'Professional' exam certs are all about.
upvoted 3 times
...
mfsec
1 year, 5 months ago
Remove the FullAWSAccess SCP from the developers account’s OU
upvoted 1 times
...
Ajani
1 year, 5 months ago
An allow list strategy has you remove the FullAWSAccess SCP that is attached by default to every OU and account. This means that no APIs are permitted anywhere unless you explicitly allow them. To allow a service API to operate in an AWS account, you must create your own SCPs and attach them to the account and every OU above it, up to and including the root. Every SCP in the hierarchy, starting at the root, must explicitly allow the APIs that you want to be usable in the OUs and accounts below it.
A deny list strategy makes use of the FullAWSAccess SCP that is attached by default to every OU and account. This SCP overrides the default implicit deny, and explicitly allows all permissions to flow down from the root to every account, unless you explicitly deny a permission with an additional SCP that you create and attach to the appropriate OU or account.
If the developers can access other services, it implies the deny list strategy; hence FullAWSAccess is in place and should be removed.
upvoted 3 times
...
Gabehcoud
1 year, 6 months ago
the question doesn't state that there is another SCP applied to developers account. By choosing B, are we just assuming ? Why can't it be D?
upvoted 2 times
atlasga
1 year, 4 months ago
It's applied by default.
upvoted 2 times
...
...
moota
1 year, 6 months ago
I was confused at first but the intersection of sets here allowed me to understand the flow of SCPs from root to child OUs https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_inheritance_auth.html
upvoted 3 times
...
jooncco
1 year, 6 months ago
Selected Answer: B
B is correct. By removing FullAWSAccess SCP, default deny will be applied.
upvoted 4 times
...
AjayD123
1 year, 7 months ago
Selected Answer: B
B is correct https://docs.aws.amazon.com/organizations/latest/APIReference/API_DetachPolicy.html
upvoted 2 times
...
masetromain
1 year, 8 months ago
Selected Answer: B
https://www.examtopics.com/discussions/amazon/view/46899-exam-aws-certified-solutions-architect-professional-topic-1/
upvoted 4 times
...
Question #33 Topic 1

A company is hosting a monolithic REST-based API for a mobile app on five Amazon EC2 instances in public subnets of a VPC. Mobile clients connect to the API by using a domain name that is hosted on Amazon Route 53. The company has created a Route 53 multivalue answer routing policy with the IP addresses of all the EC2 instances. Recently, the app has been overwhelmed by large and sudden increases to traffic. The app has not been able to keep up with the traffic.
A solutions architect needs to implement a solution so that the app can handle the new and varying load.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Separate the API into individual AWS Lambda functions. Configure an Amazon API Gateway REST API with Lambda integration for the backend. Update the Route 53 record to point to the API Gateway API.
  • B. Containerize the API logic. Create an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Run the containers in the cluster by using Amazon EC2. Create a Kubernetes ingress. Update the Route 53 record to point to the Kubernetes ingress.
  • C. Create an Auto Scaling group. Place all the EC2 instances in the Auto Scaling group. Configure the Auto Scaling group to perform scaling actions that are based on CPU utilization. Create an AWS Lambda function that reacts to Auto Scaling group changes and updates the Route 53 record.
  • D. Create an Application Load Balancer (ALB) in front of the API. Move the EC2 instances to private subnets in the VPC. Add the EC2 instances as targets for the ALB. Update the Route 53 record to point to the ALB.
Reveal Solution Hide Solution

Correct Answer: D 🗳️

Community vote distribution
A (42%)
D (34%)
C (25%)

EricZhang
Highly Voted 1 year, 8 months ago
Selected Answer: A
Serverless requires the least operational effort.
upvoted 32 times
lkyixoayffasdrlaqd
1 year, 6 months ago
How can this be the answer ?? It says: Separate the API into individual AWS Lambda functions. Can you calculate the operational overhead to do that?
upvoted 17 times
scuzzy2010
1 year, 4 months ago
Separating would be development overhead, but once done, the operational overhead (operational = ongoing day-to-day) will be the least.
upvoted 12 times
24Gel
5 months, 2 weeks ago
Disagree - the ASG in Option D, after setup, is not operational overhead either.
upvoted 1 times
24Gel
5 months, 2 weeks ago
I mean Option C, not D.
upvoted 1 times
24Gel
5 months, 2 weeks ago
never mind, A is simpler than C
upvoted 1 times
...
...
...
...
...
Jay_2pt0_1
1 year, 3 months ago
From any type of real-world perspective, this just can't be the answer IMHO. Surely AWS takes "real world" into account.
upvoted 1 times
...
I guess multivalue answer routing in Route 53 is not proper load balancing, so replacing multivalue answer routing with an ALB would properly balance the load (with minimal effort).
upvoted 3 times
...
...
jooncco
Highly Voted 1 year, 6 months ago
Selected Answer: C
Suppose there are 100 REST APIs (since this application is monolithic, that's quite common). Are you still going to copy and paste all that API code into Lambda? What if the business logic changes? This is not MINIMAL. I would go with C.
upvoted 25 times
chathur
1 year, 2 months ago
"Create an AWS Lambda function that reacts to Auto Scaling group changes and updates the Route 53 record. " This does not make any sense, why do you need to change R53 records using a Lambda?
upvoted 1 times
Vesla
1 year ago
Because if you have 4 EC2 instances in your ASG, you need 4 records in the domain name; if the ASG scales up to 6, for example, you need to add 2 more records to the domain name.
upvoted 4 times
liquen14
5 months, 3 weeks ago
Too contrived in my opinion, and what about DNS caches in the clients? You could get stuck for a while with the previous list of servers. I think it has to be A (but it would involve a considerable development effort) or D, which is extremely easy to implement but at the same time sounds a little bit fishy because they don't mention anything about an ASG or scaling. I hate these kinds of questions and I don't understand what useful insight they provide, unless they want us to become masters of the art of dealing with ambiguity.
upvoted 3 times
cnethers
2 months, 1 week ago
Agree that D does not scale to meet demand; it's just a better way to load balance than what was being done at R53 before, so the scaling issue has not been resolved. Also agree A requires more dev effort and less ops effort, so I would have to lean toward A... Answer selection is poor IMO.
upvoted 1 times
...
...
...
...
scuzzy2010
1 year, 6 months ago
It says "a monolithic REST-based API " - hence only 1 API. Initially I thought C, but I'll go with A as it says least operation overhead (not least implementation effort). Lambda has virtually no operation overhead compared to EC2.
upvoted 8 times
aviathor
1 year, 1 month ago
Answer A says "Separate the API into individual AWS Lambda functions." Makes me think there may be many APIs. However, we are looking to minimize operational effort, not development effort...
upvoted 1 times
...
Jay_2pt0_1
1 year, 3 months ago
A monolithic REST api likely has a gazillion individual APIs. This refactor would not be a small one.
upvoted 5 times
...
...
jainparag1
9 months ago
Dealing with business logic changes applies to the existing solution or to any solution, depending on the complexity; rather, it's easier to deal with when these are microservices. You shouldn't hesitate to refactor your application by putting in a one-time effort (dev overhead) to save significant operational overhead on a daily basis. AWS is pushing serverless for exactly this reason.
upvoted 1 times
...
...
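Since several comments in this thread ask what option C's Lambda would actually do, here is a minimal sketch. It assumes the function is triggered by EventBridge on the ASG's "EC2 Instance Launch Successful" and "EC2 Instance Terminate Successful" events; the hosted zone, record, and group names are placeholders, and a production version would also DELETE records for terminated instances rather than only upserting.

import boto3

HOSTED_ZONE_ID = "Z0EXAMPLE12345"   # placeholder hosted zone ID
RECORD_NAME = "api.example.com."    # placeholder record name
ASG_NAME = "api-asg"                # placeholder Auto Scaling group

asg = boto3.client("autoscaling")
ec2 = boto3.client("ec2")
r53 = boto3.client("route53")

def handler(event, context):
    # Collect the public IPs of all InService instances in the group.
    groups = asg.describe_auto_scaling_groups(AutoScalingGroupNames=[ASG_NAME])
    ids = [
        i["InstanceId"]
        for g in groups["AutoScalingGroups"]
        for i in g["Instances"]
        if i["LifecycleState"] == "InService"
    ]
    reservations = ec2.describe_instances(InstanceIds=ids)["Reservations"]
    ips = [
        inst["PublicIpAddress"]
        for r in reservations
        for inst in r["Instances"]
    ]

    # UPSERT one multivalue-answer A record per instance, keyed by its
    # SetIdentifier. A complete implementation would also delete the
    # records of instances that have left the group.
    changes = [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": RECORD_NAME,
                "Type": "A",
                "SetIdentifier": ip,
                "MultiValueAnswer": True,
                "TTL": 60,
                "ResourceRecords": [{"Value": ip}],
            },
        }
        for ip in ips
    ]
    r53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID, ChangeBatch={"Changes": changes}
    )

This glue code (plus DNS caching on clients) is exactly the ongoing operational surface that options A and D avoid, which is why the thread splits between them.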
Jason666888
Most Recent 3 weeks, 1 day ago
Selected Answer: A
It has to be A, period. Problem with C: multi-value routing has an upper limit of 8. Route 53 responds to DNS queries with up to eight healthy records and gives different answers to different DNS resolvers. Also, you need to manage the Elastic IP attachments every time new instances scale up for Route 53 multi-value routing. Problem with D: multi-value cannot work with load balancers. Please check the doc here: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy-multivalue.html
upvoted 1 times
Jason666888
3 weeks, 1 day ago
So for option C, if you scale up to 8 instances and the API still gets overwhelmed, then there's nothing more you can do about it.
upvoted 1 times
...
...
Syre
3 weeks, 6 days ago
Selected Answer: C
A requires more work and it's not practical. D cannot be the answer: why are we even moving instances to private subnets in the first place? No security or other issues are mentioned here.
upvoted 1 times
Reval
3 weeks, 6 days ago
Creating an Auto Scaling group and managing updates to Route 53 records via a Lambda function involves more complexity and management. The use of an ALB (as in Option D) is more efficient, as it inherently provides load balancing and scaling features without the need to update DNS records constantly.
upvoted 1 times
...
...
zolthar_z
1 month, 1 week ago
Selected Answer: D
Keep it simple: we can't assume whether the API is a large or small application, and the idea is the least operational overhead, which is only adding an ALB. We don't know the effort to move the application to Lambda. The answer is D.
upvoted 2 times
8693a49
3 weeks, 6 days ago
True, some apps won't work well on Lambda. On the other hand option D is missing auto-scaling, which means it won't cope with increasing traffic. Assuming the app can be ported to Lambda, A satisfies all requirements: scalability and very low operational effort.
upvoted 1 times
...
...
Moghite
1 month, 2 weeks ago
Selected Answer: D
The response is D. A requires significant refactoring of the application. B is complex and requires containerizing the application. C: multi-value answer routing is less flexible compared to using an ALB for load balancing.
upvoted 2 times
8693a49
3 weeks, 6 days ago
Refactoring is not operational effort. Operational effort is the routine work done once the application is in production (patching OS, monitoring logs, restarting servers, increasing capacity, etc). Serverless always has the lowest operational effort for the customer because AWS do it behind the scenes.
upvoted 1 times
...
mns0173
4 weeks, 1 day ago
ALB won't help you with scaling. Obviously clear case for C
upvoted 1 times
...
...
subbupro
2 months, 2 weeks ago
D is perfect - least operational effort. C needs a Lambda function to be written, which is overhead.
upvoted 1 times
...
www_Certifiedumps_com
2 months, 3 weeks ago
Selected Answer: D
The least operational overhead solution is: D. Create an Application Load Balancer (ALB) in front of the API. Move the EC2 instances to private subnets. Add the instances as targets for the ALB. Update the Route 53 record to point to the ALB.
upvoted 16 times
cnethers
2 months, 1 week ago
D does not scale to meet demand; it's just a better way to load balance than what was being done at R53 before, so the scaling issue has not been resolved. A requires more dev effort (not a consideration in the question) and less ops effort, so I would have to lean toward A... Answer selection is poor IMO for this question.
upvoted 2 times
...
...
ahhatem
2 months, 3 weeks ago
Selected Answer: A
The question requires the least operational effort... Nothing mentions the dev work to refactor!
upvoted 1 times
ahhatem
2 months, 3 weeks ago
In addition, a monolithic REST API does not necessarily require huge work to run effectively on Lambda... It depends on how it is written; it might be very easy or very complicated!
upvoted 1 times
...
...
nkv_3762
3 months ago
Selected Answer: C
C should be the answer. A: IMO it's not feasible given that the entire application is monolithic, so we can't just refactor it into separate Lambda functions. D: since there is no mention of an ASG, this is ruled out; it does nothing to address the high volume of requests.
upvoted 2 times
...
higashikumi
3 months ago
Selected Answer: D
To handle the monolithic REST-based API being overwhelmed by traffic with minimal operational overhead, the best solution involves placing an Application Load Balancer (ALB) in front of the EC2 instances and moving these instances to private subnets within the VPC. The ALB effectively distributes incoming traffic across multiple instances, preventing any single instance from being overloaded. Additionally, integrating Auto Scaling with the ALB ensures that the number of EC2 instances dynamically adjusts based on traffic load, maintaining performance and availability. This approach avoids the extensive development and refactoring efforts required by other solutions, providing a scalable and reliable setup with minimal changes to the existing infrastructure.
upvoted 2 times
...
Malcnorth59
3 months, 1 week ago
I am going to select D. If you look at what has been implemented, it effectively tries to do what an ALB + ASG does. Option A is attractive but I believe it is not the one with LEAST operational overhead. It requires a complete re-architecting and redevelopment of the solution whereas D can be done with minimal change by an operations team
upvoted 1 times
...
qaz12wsx
4 months, 1 week ago
Selected Answer: D
I go with D
upvoted 1 times
mifune
4 months, 1 week ago
That solution would be ideal, except that the question asks how to handle the increase in requests. An ALB by itself does not scale the fleet, so C is the correct answer.
upvoted 1 times
...
...
lasithasilva709
4 months, 3 weeks ago
Selected Answer: C
I choose C. A and B may need significant development effort to refactor. D doesn't address the major issue, which is scaling.
upvoted 2 times
...
Smart
4 months, 3 weeks ago
Selected Answer: A
There is a difference between development burden of refactoring and operational burden.
upvoted 1 times
...
43c89f4
4 months, 4 weeks ago
A is only partially correct: the application is monolithic, and if the EC2 instances can't handle the traffic, I don't think Lambda can either. I'd go for D, because of the multivalue issue and the ALB, target group, and ASG.
upvoted 1 times
...
VerRi
5 months ago
Selected Answer: D
A will work, but not the least operational overhead.
upvoted 1 times
...
mav3r1ck
5 months ago
Selected Answer: D
Choosing option A — separating the API into individual AWS Lambda functions and configuring an Amazon API Gateway REST API with Lambda integration — does present a modern, highly scalable solution that could theoretically handle new and varying loads with potentially lower operational overhead once implemented.
upvoted 1 times
mav3r1ck
5 months ago
There are several reasons why it might not be considered the best option with the "least" operational overhead in this specific scenario: Refactoring Effort: Transforming a monolithic application into a set of microservices or serverless functions can be a significant undertaking. It requires a thorough analysis of the existing application architecture, identifying logical separations between different parts of the application, and then implementing those separations. This process can be time-consuming and requires careful planning to ensure that the application continues to function correctly as a set of more granular services.
upvoted 1 times
mav3r1ck
5 months ago
Testing and Debugging Challenges: Serverless applications, due to their distributed nature, can present unique challenges for testing and debugging. Ensuring that the application behaves correctly as a collection of independently deployed functions requires comprehensive integration testing. Debugging issues can also be more complex compared to a monolithic architecture, where the application components are more tightly coupled.
upvoted 1 times
...
mav3r1ck
5 months ago
Development and Deployment Overhead: Initially, moving to AWS Lambda and API Gateway involves a different approach to application development, deployment, and monitoring. Teams may need to familiarize themselves with serverless architectures, adapt deployment pipelines, and implement new monitoring and logging solutions suitable for serverless environments. This learning curve and setup can introduce additional overhead before the benefits of reduced operational management are realized.
upvoted 1 times
...
...
mav3r1ck
5 months ago
In contrast, option D — creating an Application Load Balancer (ALB) in front of the API and updating the infrastructure to better manage traffic through scaling and health checks — offers a balance between reducing operational overhead and implementing the solution with minimal changes to the existing application architecture. It provides an immediate solution to the problem of handling varying loads without the significant upfront investment in refactoring the application or the learning curve associated with adopting serverless technologies.
upvoted 1 times
...
...
red_panda
5 months, 1 week ago
Selected Answer: C
For me it's C. Answer A is impossible: can you imagine how much time we would need to refactor the application into n APIs/functions? Answers B and D make no sense. The only one is C, for me.
upvoted 3 times
Helpnosense
1 month, 1 week ago
Agree. Answer A means turning a monolithic application into microservices. Compare the time spent on that work vs. the time to create an ASG plus a Lambda that retrieves the EC2 IPs and updates Route 53: obviously answer C is the least effort. D is wrong because an ALB alone, without an ASG, will not change the load-processing capacity.
upvoted 1 times
...
...
kz407
5 months, 1 week ago
Selected Answer: D
The problem I have with A is the overhead of re-architecting the application code. Turning a monolithic REST API into Lambda means time and money spent on redesign, development, testing, and deployment. That's also "operational overhead", IMHO. Option D, on the other hand, is quite straightforward. The only thing missing for it to be the obvious go-to is that it doesn't mention EC2 autoscaling. As far as the current setup is concerned, given that the only form of load balancing available is the multivalue DNS responses, it's quite possible that the topmost IP in the list always gets the most hits. When high enough traffic hits that target, it goes down (no replacement is spun up either), so that instance's IP is excluded from subsequent DNS responses. Eventually you're going to exhaust the entire set of EC2 instances. With this behaviour being more likely than not, trying to revamp the monolith onto AWS Lambda would be overkill, and it brings way too much operational overhead as well.
upvoted 2 times
...
Dgix
5 months, 3 weeks ago
The correct answer is D, believe it or not. "LEAST OPERATIONAL OVERHEAD", remember? Refactoring the monolith constitutes substantial overhead. The reason D isn't immediately apparent as the correct solution is that D doesn't mention that autoscaling _can_ be used, but the operational overhead is practically zero. This is just another "fine" example of AWS wording their questions and replies in an incomplete, ambiguous manner (which we all hate) :).
upvoted 1 times
JOKERO
5 months, 3 weeks ago
But have you answered this requirement: implement a solution so that the app can handle the new and varying load?
upvoted 1 times
...
...
_Jassybanga_
6 months, 2 weeks ago
I will go with A. It may be complex development, but with simple operational overhead, as we are going fully serverless here. Option D does not make sense to me: we are putting the API in a target group behind an ALB, but at the same time we want to point the EC2 IPs to the ALB. Not clear, to be honest.
upvoted 1 times
_Jassybanga_
6 months, 2 weeks ago
Sorry, I read it wrong. The answer can be D as well: Route 53 -> ALB -> EC2. The least changes need to be made, and it is a fine working solution.
upvoted 1 times
_Jassybanga_
6 months, 2 weeks ago
Actually the answer should be D, as the main problem is load balancing, which is achieved by using an ALB. In A the load balancing is still not happening.
upvoted 2 times
...
...
...
AWSPro1234
7 months, 2 weeks ago
Selected Answer: D
D is correct.
upvoted 3 times
...
GabrielShiao
7 months, 3 weeks ago
Selected Answer: D
D is least effort: no code change, auto scaling, HA. Changing code is not easy. Even if C is workable, it is not real load balancing, since a multivalue answer can return eight records at maximum, which is not a good choice.
upvoted 2 times
grire974
7 months, 2 weeks ago
Yeah, but D doesn't mention autoscaling; I almost wrote D, but I think it's A. Just because the load is balanced doesn't mean it can handle excess demand. Ordinarily an ALB would be attached to an ASG, but an ASG isn't mentioned, and an ALB can connect to EC2 without an ASG.
upvoted 2 times
...
...
learnwithaniket
7 months, 4 weeks ago
Selected Answer: D
Least operational overhead is D. Since they already have EC2 instances running. Creating API Gateway and Lambda requires efforts.
upvoted 2 times
...
jpa8300
7 months, 4 weeks ago
This is the kind of question where the answers are divided. Reading the explanations in this discussion, I would also choose option A, because it is the one that causes the least overhead in the day-to-day work, but I also agree with people who say that converting a monolithic app into Lambda functions is not easy. Options C and D are also correct, and each complements the other: if you have several EC2 instances and need to scale out and in, you must have an ASG, while the ALB is needed to spread the load between the EC2 instances. So in summary, it is not easy to choose the correct answer here, but I still go with A, because of the requirement of 'LEAST' overhead.
upvoted 2 times
...
ninomfr64
8 months, 1 week ago
Selected Answer: A
Not B, as EKS and EC2 require a lot of work. Not C, as EC2 requires work, and the Lambda to update Route 53 upon scale-out requires more work (and it is cumbersome). Not D, as EC2 requires more work than Lambda. A, because Route 53, API Gateway, and Lambda are all serverless managed services.
upvoted 1 times
...
subbupro
8 months, 3 weeks ago
A and B are operational overhead; we would need to do the re-architecture. And D has no scope for auto scaling, just an ALB and Route 53. So C would be best.
upvoted 1 times
grire974
7 months, 2 weeks ago
Route 53 can only return a max of 8 healthy values, so there's an upper limit on how this could scale: https://aws.amazon.com/route53/faqs/#:~:text=If%20you%20want%20to%20route,response%20to%20each%20DNS%20query. And that's ignoring the fact that it's also quite unorthodox.
upvoted 1 times
...
...
KevinYao
9 months ago
Selected Answer: C
A: it needs more development work, and it's hard to do a rolling upgrade. B: EKS cost is high. D: it's hard to auto scale without an ASG.
upvoted 1 times
grire974
7 months, 2 weeks ago
Route 53 can only return a max of 8 healthy values, so there's an upper limit on how this could scale: https://aws.amazon.com/route53/faqs/#:~:text=If%20you%20want%20to%20route,response%20to%20each%20DNS%20query. And that's ignoring the fact that it's also quite unorthodox.
upvoted 1 times
...
...
enk
9 months ago
Selected Answer: D
Least operational overhead is D. The API is already on EC2 instances. Transitioning to serverless just screams operational overhead IMO.
upvoted 3 times
grire974
7 months, 2 weeks ago
D has no asg.
upvoted 2 times
...
...
severlight
9 months, 2 weeks ago
Selected Answer: A
check EricZhang's answer
upvoted 1 times
...
rlf
10 months, 1 week ago
Answer is A. C is wrong: Route 53 multivalue answer routing is not a substitute for Elastic Load Balancing (ELB); Route 53 randomly selects up to 8 records, and the option does not mention a launch template (placing all existing instances in an ASG?). D is wrong: ALB does not support REST without API Gateway HTTP integration. https://repost.aws/knowledge-center/api-gateway-application-load-balancers
upvoted 2 times
...
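To illustrate the 8-record behaviour rlf mentions: multivalue answer routing is configured as one record per IP, each with its own SetIdentifier. A minimal boto3 sketch, with placeholder IPs and zone ID (attaching a HealthCheckId to each record is optional but is what lets Route 53 drop unhealthy IPs from responses):

import boto3

route53 = boto3.client('route53')

# One record per instance IP; Route 53 returns up to eight healthy ones.
changes = []
for i, ip in enumerate(['10.0.1.10', '10.0.1.11']):  # placeholder IPs
    changes.append({
        'Action': 'UPSERT',
        'ResourceRecordSet': {
            'Name': 'api.example.com',
            'Type': 'A',
            'SetIdentifier': f'instance-{i}',
            'MultiValueAnswer': True,
            'TTL': 60,
            'ResourceRecords': [{'Value': ip}],
        },
    })

route53.change_resource_record_sets(
    HostedZoneId='Z_EXAMPLE',
    ChangeBatch={'Changes': changes},
)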
whenthan
10 months, 1 week ago
Selected Answer: D
A - X [code refactoring and re-architecture]. B - X [containerizing: more overhead on EC2s]. C - X [Lambda function to frequently update records: more overhead]. D - correct.
upvoted 2 times
...
longns
11 months ago
Selected Answer: D
The question is just testing your knowledge of issue detection. The issue here is Route 53 with multivalue records: the system currently doesn't have load balancing.
upvoted 3 times
...
bbastia2
11 months ago
Selected Answer: D
D. Create an Application Load Balancer (ALB) in front of the API: This solution involves setting up an ALB, which can distribute incoming traffic across multiple targets, such as EC2 instances. By moving the EC2 instances to private subnets, you enhance security. The ALB can handle varying loads, and you can also set up an Auto Scaling group for the EC2 instances without needing to update Route 53 records since the ALB's DNS remains constant. This solution provides load balancing, scalability, and simplicity.
upvoted 1 times
covabix879
11 months ago
No mention of ASG. Even with ASG, it can't handle sudden increase
upvoted 2 times
...
swadeey
9 months ago
But moving EC2 instances from public to private subnets means changing IPs, plus the logic for how users access the application shifts from public IPs to private ones, and that takes a lot of overhead to configure. And we need the least overhead, right?
upvoted 1 times
...
...
AMohanty
11 months, 2 weeks ago
A. D doesn't talk about scaling in or out based on load; that eliminates D. For C, why do you need a Lambda to update Route 53? With EC2 <- ASG <- API GW <- ALB, Route 53 should do the job. B again doesn't talk about scaling in and out. Option A is viable.
upvoted 3 times
...
Simon523
11 months, 3 weeks ago
Selected Answer: A
The problem here is "The app has not been able to keep up with the traffic," so it isn't that the EC2 instances lack resources, so I guess C is not correct.
upvoted 1 times
...
[Removed]
12 months ago
Selected Answer: C
The core problem is `Recently, the app has been overwhelmed by large and sudden increases to traffic. The app has not been able to keep up with the traffic.` A never solves the problem, as the bottleneck is still on the EC2 instances. B would take tons of effort. D uses an ALB only, which does not have any autoscaling feature. C must be the only correct answer.
upvoted 2 times
...
Soweetadad
1 year ago
Does ALB even support a REST API (unless you use it with API Gateway)? I would go with either A (less right) or C.
upvoted 2 times
...
Selected Answer: D
Answer D uses an ALB to replace the Route 53 multivalue answer routing policy with proper load balancing.
upvoted 2 times
...
chico2023
1 year ago
Selected Answer: D
Answer: D. I can't believe SAP-C02 has this type of question. Least operational overhead should be A; however, the question says exactly this: "A solutions architect needs to implement a solution so that the app can handle the new and varying load." At no point does it say "...implement a solution so that the NEW app can handle...". C is a possibility, but to "Create an AWS Lambda function that reacts to Auto Scaling group changes and updates the Route 53 record"? I wouldn't even think of suggesting this unless the customer really wants it. Answer D has the "where is the ASG to handle spikes in traffic" problem, but it's the least bad in my opinion, as the issue seems to be related to poor distribution of requests, as seen here: "The company has created a Route 53 multivalue answer routing policy with the IP addresses of all the EC2 instances".
upvoted 3 times
...
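For context on what option C's "Lambda function that reacts to Auto Scaling group changes" would have to do, here is a rough sketch of such a handler, assuming it is triggered by EventBridge on the ASG's instance launch/terminate events. All names are placeholders, and the sketch omits deleting records for terminated instances, which a real implementation would also need:

import boto3

ec2 = boto3.client('ec2')
autoscaling = boto3.client('autoscaling')
route53 = boto3.client('route53')

ASG_NAME = 'api-asg'        # placeholder
ZONE_ID = 'Z_EXAMPLE'       # placeholder
RECORD = 'api.example.com'  # placeholder

def handler(event, context):
    # Look up all in-service instances in the group.
    group = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[ASG_NAME])['AutoScalingGroups'][0]
    ids = [i['InstanceId'] for i in group['Instances']
           if i['LifecycleState'] == 'InService']
    if not ids:
        return
    reservations = ec2.describe_instances(InstanceIds=ids)['Reservations']
    ips = [inst['PublicIpAddress'] for r in reservations
           for inst in r['Instances'] if 'PublicIpAddress' in inst]
    # Rewrite one multivalue record per current IP.
    changes = [{
        'Action': 'UPSERT',
        'ResourceRecordSet': {
            'Name': RECORD, 'Type': 'A',
            'SetIdentifier': ip, 'MultiValueAnswer': True,
            'TTL': 60, 'ResourceRecords': [{'Value': ip}],
        },
    } for ip in ips]
    route53.change_resource_record_sets(
        HostedZoneId=ZONE_ID, ChangeBatch={'Changes': changes})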
Asds
1 year, 1 month ago
Selected Answer: A
Can't be C, as it doesn't mention an ELB at all. Which leads to... A.
upvoted 1 times
...
softarts
1 year, 1 month ago
Selected Answer: D
Should be D, from a developer's point of view. A: moving the implementation from EC2 to Lambda? Not possible with the least overhead. B: EKS is also a lot of overhead. C: why use a Lambda to update Route 53 records? D: correct answer.
upvoted 1 times
...
awsrd2023
1 year, 1 month ago
Selected Answer: A
A: Serverless - Least OPS overhead. Rule Out Factors: B: K8s - OPS overhead + Dev overhead. C: ASG + Lambda seems impractical for sudden and large traffic surges. D: ALB + EC2 is good, but ASG is missing so not addressing traffic surges.
upvoted 2 times
...
Christina666
1 year, 1 month ago
I thought it was C, but the question is "least operational", serverless beats option C I guess, I choose A. Please delete my last comment @Examtopics
upvoted 1 times
...
Christina666
1 year, 1 month ago
Selected Answer: A
I thought it was C, but the question is "least operational", serverless beats option C I guess, and this question only has 5 instances, so I choose A
upvoted 1 times
...
SmileyCloud
1 year, 1 month ago
Selected Answer: A
It's A. The keyword here is "least operational", not "least development". So yes, the development effort with A is higher than C, but the operational effort is lower because I don't have to worry about EC2, patching, upgrades, monitoring, etc. "Least operational" <<<---
upvoted 2 times
...
NikkyDicky
1 year, 1 month ago
Selected Answer: A
Lambda - least ops overhead
upvoted 1 times
...
javitech83
1 year, 2 months ago
Selected Answer: C
I would discard A because of the development overhead. D does not have an ASG, so the only valid option would be C.
upvoted 1 times
...
gd1
1 year, 2 months ago
Selected Answer: D
GPT 4.0 Application Load Balancer (ALB) helps distribute incoming traffic across multiple targets, such as Amazon EC2 instances. This distribution helps to increase the availability of your application. ALB can scale automatically to the volume of incoming traffic. Moving the EC2 instances to private subnets in the VPC would also enhance the security posture by reducing the surface area of attack.
upvoted 3 times
...
bcx
1 year, 2 months ago
Selected Answer: C
A is definitely wrong; the question says that it is a monolithic application running on EC2, and it requires a solution with minimal operational effort. Implementing A would take a lot of time and effort to rewrite the monolithic application so it can be hosted on Lambda. B is kind of the same: Kubernetes! Containers! OMG, that's a lot of operational effort. So it is C or D, and both seem valid. But D does not have autoscaling capability, which means it could not fix the issue (handling the spikes in traffic).
upvoted 3 times
...
ailves
1 year, 2 months ago
Selected Answer: A
The question is about "LEAST operational overhead", and does that include refactoring the app? According to https://docs.aws.amazon.com/whitepapers/latest/microservices-on-aws/serverless-microservices.html AWS assumes that refactoring the app is not included in operational overhead, so the answer is A.
upvoted 1 times
...
ZK000001qws
1 year, 2 months ago
There are limitations associated with Lambda functions, and a monolithic app hosted on VMs would not be well suited to Lambda functions (it's a change of architecture). Thus a load balancer replacing the domain pointing to each VM is plausible. I would go with D.
upvoted 1 times
...
Limlimwdwd
1 year, 3 months ago
Selected Answer: D
The question mentions 5 EC2 instances in a VPC; however, Route 53 multivalue routing mainly benefits EC2 spread across Regions. The key clue is "The app has not been able to keep up with the traffic", which seems to suggest traffic could all be routed to the first EC2 IP address without the instance actually failing, and there is no mention of all 5 EC2 instances running near high CPU in the R53 multivalue mode. Hence an ALB will distribute the load for processing.
upvoted 2 times
...
aca1
1 year, 3 months ago
Selected Answer: A
I will go with A, as this is a serverless solution and, for me, the best fit for scaling with fewer day-to-day tasks (operational overhead). I looked really deeply at option C: "Create an Auto Scaling group. Place all the EC2 instances in the Auto Scaling group. Configure the Auto Scaling group to perform scaling actions that are based on CPU utilization. Create an AWS Lambda function that reacts to Auto Scaling group changes and updates the Route 53 record." But this one is missing one really important point: how will the ASG scale if you are just adding the current instances to it? The ASG needs a launch template with a standard AMI to launch new EC2 instances, and we don't have one here; we are just adding the current instances to the ASG.
upvoted 1 times
...
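On aca1's launch-template point: attaching the existing instances and scaling on CPU could look roughly like the following boto3 sketch, assuming a launch template named 'api-template' already exists (all names and IDs are placeholders). The group starts at desired capacity 0 because attach_instances raises the desired capacity by the number of instances attached:

import boto3

autoscaling = boto3.client('autoscaling')

# The group needs a launch template to launch replacements;
# 'api-template' is a placeholder that would have to exist already.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName='api-asg',
    LaunchTemplate={'LaunchTemplateName': 'api-template', 'Version': '$Latest'},
    MinSize=0, MaxSize=20, DesiredCapacity=0,
    VPCZoneIdentifier='subnet-aaaa,subnet-bbbb',
)

# Attach the five existing instances; desired capacity becomes 5.
autoscaling.attach_instances(
    AutoScalingGroupName='api-asg',
    InstanceIds=['i-1', 'i-2', 'i-3', 'i-4', 'i-5'],
)

# Scale on average CPU, as option C describes.
autoscaling.put_scaling_policy(
    AutoScalingGroupName='api-asg',
    PolicyName='cpu-target-tracking',
    PolicyType='TargetTrackingScaling',
    TargetTrackingConfiguration={
        'PredefinedMetricSpecification': {
            'PredefinedMetricType': 'ASGAverageCPUUtilization'},
        'TargetValue': 60.0,
    },
)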
ShinLi
1 year, 3 months ago
I think the answer is C, as they already have 5 EC2 instances in a public subnet, so setting up an Auto Scaling group is the easiest setup and does not carry much operational overhead.
upvoted 1 times
...
iamunstopable
1 year, 4 months ago
C is right. It handles the new and varying load by autoscaling EC2. A is wrong: it does not handle the new and varying load, it is not scalable, and it brings huge operational overhead.
upvoted 2 times
...
Sarutobi
1 year, 4 months ago
Selected Answer: A
I will pick A because this is an EXAM, but it may not be the best idea for a real-life implementation. I think D is a great first step, with the added benefit that the EC2 instances are moved to a private subnet, increasing security; maybe then I would go for A. C is also possible, but I don't like multivalue routing replacing a load balancer, and that solution with Lambda updating Route 53... hmm, not sure I like it too much; maybe a lifecycle hook.
upvoted 2 times
...
frfavoreto
1 year, 4 months ago
Selected Answer: A
People get confused between LEAST OPERATIONAL OVERHEAD and IMPLEMENTATION EFFORT. These are 2 different and completely independent concepts.
upvoted 5 times
...
OnePunchExam
1 year, 4 months ago
Selected Answer: A
1. Whenever I see this type of question with the key requirement 'LEAST operational overhead', many people confuse the initial cloud infrastructure setup for the new solution with overhead, which it is not. Operational overhead is about maintenance, patching, backups, etc. 2. Also, the monolithic part is meant to confuse, though migrating it is possible (see https://aws.amazon.com/blogs/compute/migrating-a-monolithic-net-rest-api-to-aws-lambda/). 3. Lastly, don't make assumptions about the application. I see comments about 100 REST APIs, refactoring effort, etc.
upvoted 4 times
...
mikad
1 year, 4 months ago
Since the request is LEAST *operational* overhead, I will go with A.
upvoted 1 times
...
takecoffe
1 year, 4 months ago
I will choose D.
upvoted 1 times
...
mfsec
1 year, 5 months ago
Selected Answer: A
I vote A - separate Lambda functions.
upvoted 2 times
...
zejou1
1 year, 5 months ago
Selected Answer: A
https://aws.amazon.com/getting-started/hands-on/break-monolith-app-microservices-ecs-docker-ec2/module-one/ and https://docs.aws.amazon.com/whitepapers/latest/microservices-on-aws/serverless-microservices.html Just saying, moving it to a microservice architecture not only makes sense but will remove a lot of operational overhead.
upvoted 3 times
...
dev112233xx
1 year, 5 months ago
Selected Answer: D
This question is the mother of all tricky questions, lol. The main issue of the current design is that Route 53 is used to distribute the load to the app, which is bad practice. This is why I think an ALB is the best solution here. Answer A is incorrect because it is a big refactor, and that is the last thing you want to think about! Answer C solves only the autoscaling but misses the ALB and still uses Route 53 as a load balancer!
upvoted 6 times
chathur
1 year, 3 months ago
Sudden bursts of traffic can not be contained using ASGs.
upvoted 1 times
...
rtgfdv3
1 year, 5 months ago
Agree with you. It makes no sense to refactor the app without knowing the details (A & B). I don't see why you would create a Lambda to add and remove Route 53 records that could be cached for the duration of the TTL (C).
upvoted 3 times
...
...
doto
1 year, 5 months ago
Selected Answer: C
ccccccccccccc
upvoted 1 times
...
_lasco_
1 year, 6 months ago
Selected Answer: C
C is correct. A: may require a lot of refactoring effort toward Lambda and a different architecture. B: may require a lot of refactoring effort toward containers/Kubernetes and a different architecture. C: correct. D: it would be great to have a load balancer, but the solution does not involve autoscaling, so by itself it does not satisfy the increase in demand. Also, moving instances to a private subnet may not be viable, depending on the app's behaviour.
upvoted 1 times
...
cudbyanc
1 year, 6 months ago
Selected Answer: D
Option A and B suggest re-architecting the application, which may require significant development work and operational overhead. Option C adds complexity by requiring an additional Lambda function to update the Route 53 record.
upvoted 3 times
anita_student
1 year, 5 months ago
That's correct, but unfortunately D is not scalable, as it's missing the ASG.
upvoted 1 times
...
...
hobokabobo
1 year, 6 months ago
Selected Answer: A
Why do they give a question with a set of answers that are all bad given the scenario? D misses the autoscaler; it just does not solve what the architect was asked to solve. C simply does not work: changing DNS needs to take the TTL into account. B adds overhead for Kubernetes. A works but is ridiculously expensive and comes with operational effort to maintain the Lambda. So I guess the only possible option is A. Disclaimer: no one reasonable would use Lambda for high load. If the load justifies an EC2 instance, let alone 5 EC2 instances, EC2 is the way to go: autoscaler, load balancer. That is simple, and simple means less operational overhead, while complexity means operational overhead. Lambda adds complexity and is expensive under load (one invocation: cheap, but not a massive number of invocations).
upvoted 3 times
...
God_Is_Love
1 year, 6 months ago
Selected Answer: A
My logical answer, after reading some discussion comments: least operational effort does not mean a quick fix; it means the least work to maintain. C is wrong; it seemed good reading the first part, but at the end it makes the weird statement that a Lambda updates Route 53 every time it reacts. Why update the DNS service every time? D is not apt because why would we put internet-facing EC2 instances in a private subnet? That adds the additional overhead of maintaining NAT gateways, route tables, etc. So a serverless solution for least operational effort leaves A or B. I feel B is over-provisioning with ECS/EKS clustering, because it looks like a low/medium-scale app with just 5 EC2 instances. I'd go with A as the best answer.
upvoted 3 times
...
kiran15789
1 year, 6 months ago
Selected Answer: C
C based on minimal operational overhead
upvoted 1 times
kiran15789
1 year, 5 months ago
Decided to update my answer to D
upvoted 1 times
...
...
spd
1 year, 6 months ago
Selected Answer: A
API Gateway is the option
upvoted 1 times
...
tinyflame
1 year, 6 months ago
Selected Answer: A
Not C, because of the max of 8 EC2 IPs in Route 53 multivalue answers.
upvoted 3 times
...
DWsk
1 year, 6 months ago
Selected Answer: A
I know this question is gonna be a controversial one. The real issue is what LEAST OPERATIONAL OVERHEAD means. It could mean the least amount of work to set up initially, in which case the answer is definitely C; converting a monolithic application to Lambda is not a simple task. But if operational overhead means how much work it takes to maintain, the answer is definitely A, because serverless takes a lot less effort once it's operational. Personally, I would go with A on this question. I've been taking these cert exams for a while now and I get the sense that AWS wants you to use serverless. Additionally, I'm not quite sure what it means in C to have the Lambda update Route 53...
upvoted 5 times
...
oatif
1 year, 6 months ago
Selected Answer: C
The answer is C; no idea why people are voting for A. C requires the minimum amount of effort.
upvoted 1 times
oatif
1 year, 6 months ago
Operational overhead means less effort in the long run, so I would change my answer to A.
upvoted 1 times
...
...
zozza2023
1 year, 6 months ago
why not C?
upvoted 2 times
...
viddkr
1 year, 7 months ago
Selected Answer: A
Question on 23-Jan-2023, selected A
upvoted 2 times
...
masetromain
1 year, 7 months ago
Selected Answer: A
Option A is good because it separates the API into individual AWS Lambda functions, which allows for automatic scaling of the backend based on the traffic it receives. Additionally, it also allows for more fine-grained scaling of specific parts of the API that may be receiving more traffic than others. By configuring an Amazon API Gateway REST API with Lambda integration, you can also benefit from features such as caching, monitoring, and security. Finally, by updating the Route 53 record to point to the API Gateway API, you can ensure that mobile clients are directed to the correct endpoint. This solution will have the least operational overhead, as it allows for automatic scaling and offloads many of the operational responsibilities to the managed services provided by AWS.
upvoted 1 times
tatdatpham
1 year, 6 months ago
I think the answer is C; you forgot that the application is monolithic. You need a lot of effort to migrate the app to Lambda functions.
upvoted 3 times
...
...
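For scale, option A's endpoints would each become a handler like the hypothetical sketch below, written for API Gateway's Lambda proxy integration (the '/items' route and the response payload are invented for illustration):

import json

def handler(event, context):
    # API Gateway proxy integration passes method, path, and body in the event.
    if event.get('httpMethod') == 'GET' and event.get('path') == '/items':
        return {
            'statusCode': 200,
            'headers': {'Content-Type': 'application/json'},
            'body': json.dumps({'items': []}),  # placeholder payload
        }
    return {'statusCode': 404, 'body': json.dumps({'error': 'not found'})}

The operational-overhead argument for A rests on each such function scaling and patching itself; the refactoring-cost argument against A is about how many of these a monolith would have to be split into.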
adit
1 year, 7 months ago
Selected Answer: C
C - least operational effort from the existing setup. A - operational effort is high. B - containerize: operational effort is high. D - ALB and private subnet: operational effort is high.
upvoted 4 times
...
masetromain
1 year, 8 months ago
Selected Answer: D
I go with D
upvoted 2 times
zhangyu20000
1 year, 8 months ago
D does not have ASG, it cannot scale out
upvoted 3 times
masetromain
1 year, 7 months ago
Correct, option D does not include the use of an Auto Scaling group, which would be necessary for the API to automatically scale based on traffic. This would increase the operational overhead as manual scaling actions would need to be taken to handle the increased traffic. Option A or B would be better in this case as they both include automated scaling capabilities.
upvoted 1 times
...
...
...
Question #34 Topic 1

A company has created an OU in AWS Organizations for each of its engineering teams. Each OU owns multiple AWS accounts. The organization has hundreds of AWS accounts.
A solutions architect must design a solution so that each OU can view a breakdown of usage costs across its AWS accounts.
Which solution meets these requirements?

  • A. Create an AWS Cost and Usage Report (CUR) for each OU by using AWS Resource Access Manager. Allow each team to visualize the CUR through an Amazon QuickSight dashboard.
  • B. Create an AWS Cost and Usage Report (CUR) from the AWS Organizations management account. Allow each team to visualize the CUR through an Amazon QuickSight dashboard.
  • C. Create an AWS Cost and Usage Report (CUR) in each AWS Organizations member account. Allow each team to visualize the CUR through an Amazon QuickSight dashboard.
  • D. Create an AWS Cost and Usage Report (CUR) by using AWS Systems Manager. Allow each team to visualize the CUR through Systems Manager OpsCenter dashboards.
Reveal Solution Hide Solution

Correct Answer: B 🗳️

Community vote distribution
B (90%)
7%

masetromain
Highly Voted 1 year, 7 months ago
Selected Answer: B
B is the correct answer. The solution would be to create an AWS Cost and Usage Report (CUR) from the AWS Organizations management account. This would allow the management account to view the usage costs across all the member accounts, and the teams can visualize the CUR through an Amazon QuickSight dashboard. This allows the organization to have a centralized place to view the cost breakdown and the teams to access the cost breakdown in an easy way.
upvoted 18 times
...
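For reference, creating the report that option B describes is a single API call from the management account. A minimal boto3 sketch; the report name, bucket, and prefix are placeholders, and note the CUR API is only available in us-east-1:

import boto3

# Must be called from the management (payer) account to cover all members.
cur = boto3.client('cur', region_name='us-east-1')

cur.put_report_definition(ReportDefinition={
    'ReportName': 'org-wide-cur',           # placeholder
    'TimeUnit': 'DAILY',
    'Format': 'textORcsv',
    'Compression': 'GZIP',
    'AdditionalSchemaElements': ['RESOURCES'],
    'S3Bucket': 'example-cur-bucket',       # placeholder, must pre-exist
    'S3Prefix': 'cur',
    'S3Region': 'us-east-1',
    'AdditionalArtifacts': ['QUICKSIGHT'],  # produces QuickSight-ready output
    'RefreshClosedReports': True,
    'ReportVersioning': 'OVERWRITE_REPORT',
})

Each team's QuickSight dashboard would then read from the delivered report files, filtered to that OU's accounts.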
TonytheTiger
Most Recent 3 months, 3 weeks ago
Selected Answer: C
Option C. I hate this question because it has two plausible answers but only one truly correct one. I had to read the question like 20 times until I understood it: it asks for "a solution so that EACH OU can VIEW a breakdown of usage costs across ITS accounts". It is only asking for each OU's own breakdown, so its member accounts can see the usage cost, NOT the whole organization's. Prior to Dec 2020 option B would have been correct; since then, it's option C. Read the following AWS update: https://aws.amazon.com/about-aws/whats-new/2020/12/cost-and-usage-report-now-available-to-member-linked-accounts/?pg=ln&sec=uc
upvoted 2 times
...
gofavad926
5 months, 1 week ago
Selected Answer: B
B. Create an AWS Cost and Usage Report (CUR) from the AWS Organizations management account. Allow each team to visualize the CUR through an Amazon QuickSight dashboard.
upvoted 1 times
...
Rajarshi
6 months ago
C, as the target is to design a solution so that each OU can view a breakdown of usage costs across its AWS accounts.
upvoted 1 times
...
acordovam
6 months, 2 weeks ago
Selected Answer: A
The question specifies that each OU should only view their own AWS accounts, not all accounts in the organization. While creating the solution in the management account might offer a centralized approach, it violates this crucial requirement.
upvoted 1 times
acordovam
6 months, 2 weeks ago
Sorry, I'm wrong, RAM can't create a Cost Report.
upvoted 3 times
...
...
abeb
9 months ago
B. From the management account of the organization.
upvoted 1 times
...
daz2023
10 months, 4 weeks ago
AWS Resource Access Manager has nothing to do with creating CUR. Answer B is correct. Use AWS Organization management account
upvoted 1 times
...
duriselvan
1 year ago
https://aws.amazon.com/blogs/mt/visualize-and-gain-insights-into-your-aws-cost-and-usage-with-cloud-intelligence-dashboards-using-amazon-quicksight/
upvoted 1 times
...
NikkyDicky
1 year, 1 month ago
Selected Answer: B
B by elimination
upvoted 1 times
...
gameoflove
1 year, 3 months ago
Selected Answer: B
B, as the AWS Organizations management account is the only correct option.
upvoted 1 times
...
leehjworking
1 year, 4 months ago
Can anyone explain why A is wrong? Thank you.
upvoted 1 times
scuzzy2010
1 year, 3 months ago
AWS Resource Access Manager has nothing to do with creating CURs. It's for sharing resources with other accounts.
upvoted 4 times
...
...
mfsec
1 year, 5 months ago
Selected Answer: B
B. Create an AWS Cost and Usage Report (CUR) from the AWS Organizations management account.
upvoted 2 times
...
masetromain
1 year, 8 months ago
Selected Answer: B
https://www.examtopics.com/discussions/amazon/view/71951-exam-aws-certified-solutions-architect-professional-topic-1/
upvoted 3 times
...
Question #35 Topic 1

A company is storing data on premises on a Windows file server. The company produces 5 GB of new data daily.
The company migrated part of its Windows-based workload to AWS and needs the data to be available on a file system in the cloud. The company already has established an AWS Direct Connect connection between the on-premises network and AWS.
Which data migration strategy should the company use?

  • A. Use the file gateway option in AWS Storage Gateway to replace the existing Windows file server, and point the existing file share to the new file gateway.
  • B. Use AWS DataSync to schedule a daily task to replicate data between the on-premises Windows file server and Amazon FSx.
  • C. Use AWS Data Pipeline to schedule a daily task to replicate data between the on-premises Windows file server and Amazon Elastic File System (Amazon EFS).
  • D. Use AWS DataSync to schedule a daily task to replicate data between the on-premises Windows file server and Amazon Elastic File System (Amazon EFS).
Reveal Solution Hide Solution

Correct Answer: B 🗳️

Community vote distribution
B (61%)
A (39%)

masetromain
Highly Voted 1 year, 7 months ago
Selected Answer: B
B ("Use AWS DataSync to schedule a daily task to replicate data between the on-premises Windows file server and Amazon FSx") and D (the same, but targeting Amazon Elastic File System) both look valid. Both use DataSync to schedule a daily task to replicate the data between on-premises and the cloud; the main difference is the type of file system in the cloud, Amazon FSx or Amazon EFS.
upvoted 14 times
rbm2023
1 year, 3 months ago
EFS only supports Linux file systems; this is why we need to go for FSx. Option B.
upvoted 21 times
Karamen
1 year ago
Thanks for this explanation. > EFS only supports Linux FS; this is why we need to go for FSx. Option B.
upvoted 1 times
...
...
...
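As a concrete illustration of option B's moving parts, a DataSync setup would look roughly like this boto3 sketch: an SMB source location (reached through an on-premises DataSync agent), an FSx for Windows File Server destination, and a task with a daily schedule. All ARNs, hostnames, and credentials are placeholders:

import boto3

datasync = boto3.client('datasync')

# Source: the on-premises SMB share, via a DataSync agent deployed on premises.
src = datasync.create_location_smb(
    ServerHostname='fileserver.corp.example.com',
    Subdirectory='/share',
    User='svc-datasync',
    Password='REDACTED',
    AgentArns=['arn:aws:datasync:us-east-1:111122223333:agent/agent-EXAMPLE'],
)['LocationArn']

# Destination: an existing FSx for Windows File Server file system.
dst = datasync.create_location_fsx_windows(
    FsxFilesystemArn='arn:aws:fsx:us-east-1:111122223333:file-system/fs-EXAMPLE',
    SecurityGroupArns=['arn:aws:ec2:us-east-1:111122223333:security-group/sg-EXAMPLE'],
    User='svc-datasync',
    Password='REDACTED',
)['LocationArn']

# Daily task at 02:00 UTC; 5 GB/day finishes quickly over Direct Connect.
datasync.create_task(
    SourceLocationArn=src,
    DestinationLocationArn=dst,
    Name='daily-windows-sync',
    Schedule={'ScheduleExpression': 'cron(0 2 * * ? *)'},
)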
victorHugo
Highly Voted 12 months ago
Selected Answer: A
For A and B we need FSx. DataSync is useful for batches and is able to process large data volumes. With (a) the data is also accessible from on-prem. The data volume is quite small (5 GB per day), therefore (a) is feasible. In my opinion, the key requirement is "data to be available on a file system in the cloud" and "migrating workloads", and I think this includes that it can be accessed from servers on-prem. In addition, (a) replaces only a Windows file server and not the overall Windows landscape in AWS. Therefore I vote for (a), AWS Storage Gateway. See https://tutorialsdojo.com/aws-datasync-vs-storage-gateway/ for a comparison.
upvoted 12 times
swadeey
9 months ago
The correct point here is migration, not daily sync and replication.
upvoted 3 times
...
vn_thanhtung
11 months, 3 weeks ago
needs the data to be available on a file system in the cloud
upvoted 2 times
...
...
8693a49
Most Recent 3 weeks, 6 days ago
Selected Answer: A
Because part of the workloads have already been migrated we need a solution that keeps the data consistent between on prem and the cloud. With DataSync files stored by systems on-prem would be visible in the cloud only the following day. This could cause data inconsistencies and business disruption. The best solution is to use a file gateway to maintain files synchronised at all times. 5GBs/day is easily transferable over DX
upvoted 2 times
...
gfhbox0083
1 month, 2 weeks ago
B, for sure. Needs the data to be available on a file system in the cloud.
upvoted 1 times
...
mifune
4 months, 1 week ago
Selected Answer: B
Windows file server -> FSx (crystal clear).
upvoted 1 times
...
Vongolatt
4 months, 3 weeks ago
Selected Answer: B
A is not a data migration strategy.
upvoted 2 times
...
mav3r1ck
5 months ago
Selected Answer: B
option B is the most suitable data migration strategy for the company. It leverages AWS DataSync to automate the replication of daily data increments from the on-premises Windows file server to Amazon FSx for Windows File Server. This approach provides a seamless integration for Windows-based workloads with minimal disruption and supports the company's needs for a cloud-native file system that is fully managed and integrates well with AWS services.
upvoted 1 times
8693a49
3 weeks, 6 days ago
Batch sync is not seamless and might not come with minimal disruption, depending on how it's used. Is this generated with ChatGPT?
upvoted 1 times
...
...
mav3r1ck
5 months ago
Selected Answer: B
This option is particularly suitable for the company's requirements because it allows for scheduled daily tasks to efficiently replicate the 5 GB of new data to Amazon FSx, providing a cloud-native file system that integrates well with Windows-based workloads.
upvoted 2 times
...
gofavad926
5 months, 1 week ago
Selected Answer: B
B is the answer
upvoted 1 times
...
a54b16f
5 months, 3 weeks ago
Selected Answer: B
B is right, but I wish they would change "FSx" to "FSx for Windows File Server".
upvoted 1 times
...
Dgix
5 months, 3 weeks ago
The key here is the word "migration". This suggests DataSync. If the objective was to set up a permanent hybrid solution, then AWS Storage Gateway would be the solution. Again an example where the entire question hinges on one single word.
upvoted 1 times
8693a49
3 weeks, 6 days ago
My thinking is that a file gateway would be better in a migration where applications are moved in batches because it provides a drop-in replacement. Batch sync could break some of the apps that have not yet migrated.
upvoted 1 times
...
...
djeong95
6 months ago
Selected Answer: B
https://docs.aws.amazon.com/fsx/latest/WindowsGuide/migrate-files-fsx.html
upvoted 1 times
...
8608f25
6 months, 2 weeks ago
Selected Answer: B
The most appropriate data migration strategy for the company, considering the need for the data to be available on a file system in the cloud and the existing AWS Direct Connect connection, is: * B. Use AWS DataSync to schedule a daily task to replicate data between the on-premises Windows file server and Amazon FSx. Option B is the best choice because AWS DataSync is a data transfer service designed to make it easy to move large amounts of data online between on-premises storage systems and AWS storage services. Amazon FSx provides fully managed Windows file servers in the cloud, offering native Windows file system capabilities, making it an ideal target for Windows-based workloads that the company has migrated to AWS. Using DataSync to automate the daily replication of data ensures the new data produced is consistently available in the cloud with minimal manual effort.
upvoted 1 times
...
Vaibs099
6 months, 4 weeks ago
"The company migrated part of its Windows-based workload to AWS and needs the data to be available on a file system in the cloud." This line is key here: they have already moved part of the Windows workload and require the data to be available on a file system in the cloud. This is possible with DataSync migrating data to FSx for Windows File Server (an SMB-capable file server). A file gateway would connect to S3 and, with local hardware, give the illusion of a file server; this is good for DR, backup, and migration to cheaper storage, but it doesn't solve the purpose of moving the data and creating another file share in the cloud.
upvoted 1 times
...
tmlong18
7 months, 2 weeks ago
Selected Answer: A
B incorrect. Since you created a DX between AWS and on-premises, you can mount FSx on your local server directly. It doesn't make sense to schedule a daily task.
upvoted 1 times
e4bc18e
5 months, 3 weeks ago
This is wrong. Look at what the question actually asks: it asks what a proper MIGRATION strategy is. A file gateway only lets you access data; it does not migrate data from on premises.
upvoted 1 times
...
...
0c118eb
8 months, 1 week ago
Selected Answer: B
Anyone saying A has never used file gateway before. You can't "point the existing file share to the new file gateway". That's not how file gateways work.
upvoted 2 times
...
ninomfr64
8 months, 1 week ago
Selected Answer: A
It's a Windows workload, thus C and D are ruled out (EFS is NFS only). B is precisely stated, pointing to FSx for Windows, while for A to work we have to assume it means the FSx for Windows File Gateway, which is not clearly stated. Assuming it is the FSx for Windows File Gateway, A is more versatile, as it is quicker at syncing data (B syncs once a day).
upvoted 3 times
...
shaaam80
8 months, 3 weeks ago
Selected Answer: B
Answer B. The company is looking for a migration strategy. With AWS Direct Connect in place, DataSync replication is the way to go: 5 GB of new data would be replicated in no time.
upvoted 2 times
...
swadeey
9 months ago
Aren't we talking about migration, not sync? Migration means moving the data and using one solution. Option B says to schedule a daily task to replicate; that means we keep on-premises working and then replicate to the cloud. Shouldn't migration mean moving to one file system, either behind a cloud gateway or kept on the server?
upvoted 2 times
...
trap
9 months ago
Correct: A https://aws.amazon.com/storagegateway/file/fsx/
upvoted 2 times
trap
9 months ago
https://docs.aws.amazon.com/filegateway/latest/filefsxw/what-is-file-fsxw.html https://docs.aws.amazon.com/filegateway/latest/filefsxw/file-gateway-fsx-concepts.html
upvoted 2 times
...
...
BECAUSE
9 months, 1 week ago
Selected Answer: B
B is the answer
upvoted 1 times
...
severlight
9 months, 2 weeks ago
Selected Answer: B
obvious
upvoted 1 times
...
covabix879
11 months ago
Selected Answer: A
File Gateway is a better option considering the company already has Direct Connect. It will synchronize files with the cloud continuously rather than daily.
upvoted 2 times
grire974
7 months, 2 weeks ago
Agreed, but the answer doesn't mention moving the existing data to the cloud, so it would just be an empty gateway.
upvoted 2 times
...
...
KungLjao
11 months ago
Selected Answer: A
Has to be A, since a daily sync job won't make the data available immediately.
upvoted 2 times
...
Simon523
11 months, 3 weeks ago
Selected Answer: B
The question requires that the data can be accessed by both the on-premises and in-cloud Windows servers ("migrated part of its Windows-based workload"), so A is wrong.
upvoted 2 times
daz2023
10 months, 4 weeks ago
It doesn't say access by both on-prem and on-cloud is required.
upvoted 2 times
...
...
aviathor
12 months ago
Selected Answer: A
1) Any answer mentioning EFS is out, since EFS is for Linux only. 2) We are now left with DataSync vs. File Gateway. The difference is that DataSync is batch-oriented, meaning that data will be out of sync between on-premises and the cloud between two synchronization jobs. File Gateway for FSx, on the other hand, synchronizes continuously. I would choose A because it is the most "versatile" option, allowing access to the data from AWS as well as from on-premises.
upvoted 2 times
vn_thanhtung
11 months, 3 weeks ago
needs the data to be available on a file system in the cloud. So A?
upvoted 2 times
...
...
CloudHandsOn
1 year ago
Selected Answer: B
B. What decided between B and A for me was the last sentence of the question, "Which migration strategy...". The best migration strategy for this use case is AWS DataSync.
upvoted 1 times
...
chico2023
1 year ago
Selected Answer: B
Answer: B The company is migrating part of their Windows-based workload that taps into a Windows file server. This eliminates C and D right away. A seems incorrect. It mentions the File Gateway option in AWS Storage Gateway, BUT, this File Gateway has to connect to something, like a FSx share or an S3 bucket. It doesn't specify it. Not to mention that it seems they are not looking for a way for the whole company to tap from the cloud (even with it being cached on-prem), they seem to only want "the data to be available on a file system in the cloud" for " part of their Windows-based workload" in AWS. Due to that, B is the most correct option in my opinion.
upvoted 4 times
...
aviathor
1 year, 1 month ago
Selected Answer: A
Amazon FSx File Gateway optimizes on-premises access to fully managed, highly reliable file shares in Amazon FSx for Windows File Server. Customers with unstructured or file data, whether from SMB-based group shares, or business applications, may require on-premises access to meet low-latency requirements. Amazon FSx File Gateway helps accelerate your file-based storage migration to the cloud to enable faster performance, improved data protection, and reduced cost.
upvoted 2 times
...
NikkyDicky
1 year, 1 month ago
Selected Answer: B
B. 1 - Windows -> FSx. 2 - A would've been an option if it had mentioned migrating the data to S3 first.
upvoted 2 times
...
hglopes
1 year, 2 months ago
Selected Answer: A
A works toward a full migration and allows migrated workloads to use fully up-to-date data at any point, not just a daily sync, which might not be enough.
upvoted 2 times
...
SkyZeroZx
1 year, 2 months ago
Selected Answer: B
Keyword = migration strategy, so B.
upvoted 1 times
...
gameoflove
1 year, 3 months ago
Selected Answer: B
B, as Amazon FSx supports the Windows file system and can also be mounted as an external drive.
upvoted 2 times
...
Sarutobi
1 year, 4 months ago
"The company migrated part of its Windows-based workload to AWS" so those Ec2 windows now need access to that data; I believe FSx is the best way. Option A, using storage gateway the data ends on S3 or... FSx. DataSync is also a great utility when teaming up with DX.
upvoted 1 times
...
Sin_ha
1 year, 4 months ago
Since the company needs the data to be available on a file system in the cloud, the best option is to use Amazon FSx for Windows File Server to store and access the data. Therefore, option B is the correct choice, and the company should use AWS DataSync to schedule a daily task to replicate data between the on-premises Windows file server and Amazon FSx.
upvoted 1 times
aviathor
1 year, 1 month ago
The problem I have with B is that it talks about a DAILY task. So the workload running on prem and in the cloud may be up to 24 hours out of sync.
upvoted 1 times
...
...
takecoffe
1 year, 4 months ago
I will go with A. They are talking about migration to the cloud, not a hybrid solution: "Which data migration strategy should the company use?"
upvoted 2 times
OnePunchExam
1 year, 4 months ago
Data migration is simply moving data from A to B, it doesn't mean it is a one-off thing like as part of cloud migration workload strategy. Answer is B.
upvoted 1 times
...
...
mfsec
1 year, 5 months ago
Selected Answer: B
B is the right answer.
upvoted 2 times
...
testingaws123
1 year, 5 months ago
Selected Answer: A
"The company migrated part of its Windows-based workload to AWS and needs the data to be available on a file system in the cloud." Here it is open to discussion: do they want to migrate the entire dataset to the cloud, or do they just want the data to be available in the cloud? It sounds like the data will sync to the cloud and remain active on-prem, which leads to option A.
upvoted 4 times
...
zejou1
1 year, 5 months ago
Selected Answer: B
https://docs.aws.amazon.com/efs/latest/ug/trnsfr-data-using-datasync.html
upvoted 1 times
...
_lasco_
1 year, 6 months ago
Selected Answer: B
B I was in doubt between B and D, but EFS does not support windows for mounting: https://docs.aws.amazon.com/efs/latest/ug/mounting-fs.html
upvoted 2 times
...
moota
1 year, 6 months ago
Selected Answer: A
I am curious if Amazon FSx File Gateway from AWS Storage Gateway (https://aws.amazon.com/storagegateway/file/) can address this.
upvoted 2 times
...
oatif
1 year, 6 months ago
Selected Answer: B
My initial thought was A, but the solution requires the data to be available in the cloud, not to replace a Windows file server with a cloud-backed solution like Storage Gateway. So B is correct.
upvoted 3 times
vvahe
1 year, 5 months ago
Correct, it says "The company migrated part of its Windows-based workload to AWS" so there is still some workload onpremise, this is not about data also workloads, so A is incorrect as smiply replacing the existing windows file server is not an option. Also DataSync work with Direct Connect which the company already uses further giving a hint to B
upvoted 1 times
...
aviathor
1 year, 1 month ago
What bothers me about B is the DAILY synchronisation with a part of the workload remaining on-prem, and the rest on AWS.
upvoted 1 times
...
...
masetromain
1 year, 8 months ago
Selected Answer: B
https://www.examtopics.com/discussions/amazon/view/47620-exam-aws-certified-solutions-architect-professional-topic-1/
upvoted 4 times
...
Question #36 Topic 1

A company’s solutions architect is reviewing a web application that runs on AWS. The application references static assets in an Amazon S3 bucket in the us-east-1 Region. The company needs resiliency across multiple AWS Regions. The company already has created an S3 bucket in a second Region.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Configure the application to write each object to both S3 buckets. Set up an Amazon Route 53 public hosted zone with a record set by using a weighted routing policy for each S3 bucket. Configure the application to reference the objects by using the Route 53 DNS name.
  • B. Create an AWS Lambda function to copy objects from the S3 bucket in us-east-1 to the S3 bucket in the second Region. Invoke the Lambda function each time an object is written to the S3 bucket in us-east-1. Set up an Amazon CloudFront distribution with an origin group that contains the two S3 buckets as origins.
  • C. Configure replication on the S3 bucket in us-east-1 to replicate objects to the S3 bucket in the second Region. Set up an Amazon CloudFront distribution with an origin group that contains the two S3 buckets as origins.
  • D. Configure replication on the S3 bucket in us-east-1 to replicate objects to the S3 bucket in the second Region. If failover is required, update the application code to load S3 objects from the S3 bucket in the second Region.
Reveal Solution Hide Solution

Correct Answer: D 🗳️

Community vote distribution
C (95%)
2%

zhangyu20000
Highly Voted 1 year, 8 months ago
C is correct. https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/high_availability_origin_failover.html
upvoted 15 times
...
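The failover mechanism zhangyu20000 links to is configured as an origin group inside the CloudFront distribution config. A sketch of just that fragment, with placeholder origin IDs (the full DistributionConfig would also define the two S3 origins and point DefaultCacheBehavior at the group):

# Fragment of a CloudFront DistributionConfig showing an origin group.
# The secondary origin is tried when the primary returns a listed status code.
origin_groups = {
    'Quantity': 1,
    'Items': [{
        'Id': 'assets-origin-group',
        'FailoverCriteria': {
            'StatusCodes': {'Quantity': 3, 'Items': [500, 502, 503]},
        },
        'Members': {
            'Quantity': 2,
            'Items': [
                {'OriginId': 's3-us-east-1'},      # primary (placeholder ID)
                {'OriginId': 's3-second-region'},  # failover (placeholder ID)
            ],
        },
    }],
}
# The distribution's DefaultCacheBehavior would then set
# TargetOriginId = 'assets-origin-group'.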
MAZIADI
Most Recent 2 weeks, 1 day ago
Selected Answer: C
Why not D? "Configure replication on the S3 bucket in us-east-1 to replicate objects to the S3 bucket in the second Region. If failover is required, update the application code to load S3 objects from the S3 bucket in the second Region." Manual failover: this option involves manual updates to the application code in the event of a failover, which adds operational overhead and complexity. CloudFront provides automatic failover and load balancing, making it a more streamlined solution.
upvoted 1 times
...
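To make the manual-failover objection to D concrete: without CloudFront, the fallback logic lives in application code, roughly like this sketch (bucket names are placeholders, and the second Region is an assumption, since the question doesn't name it):

import boto3
from botocore.exceptions import ClientError

primary = boto3.client('s3', region_name='us-east-1')
secondary = boto3.client('s3', region_name='us-west-2')  # assumed second Region

def get_asset(key):
    # Try the primary bucket first; fall back to the replica on failure.
    try:
        obj = primary.get_object(Bucket='assets-us-east-1', Key=key)
    except ClientError:
        obj = secondary.get_object(Bucket='assets-second-region', Key=key)
    return obj['Body'].read()

Every such code path has to be written, tested, and maintained, which is the overhead that option C's origin group avoids.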
sarlos
4 months, 1 week ago
C IS THE answer
upvoted 1 times
...
gofavad926
5 months, 1 week ago
Selected Answer: C
C is correct
upvoted 1 times
...
VerRi
6 months ago
Selected Answer: C
Straightforward
upvoted 1 times
...
8608f25
6 months, 2 weeks ago
Selected Answer: C
Option C is the most efficient solution because it leverages S3’s built-in replication feature to automatically replicate objects to a second bucket in another Region, ensuring that the data is resiliently stored across multiple Regions. By using Amazon CloudFront with an origin group containing both S3 buckets, the application benefits from CloudFront’s global content delivery network, which improves load times and provides a built-in failover mechanism. This setup minimizes operational overhead while achieving the desired resiliency and performance improvements. Option C provides a seamless, automated solution for achieving resiliency across multiple AWS Regions with minimal operational effort, leveraging AWS services designed for replication, content delivery, and failover.
upvoted 1 times
...
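The replication half of option C is a one-time configuration. A minimal boto3 sketch, assuming an existing IAM replication role; bucket names and the role ARN are placeholders, and note both buckets must have versioning enabled:

import boto3

s3 = boto3.client('s3')

# Replication requires versioning on both buckets.
for bucket in ['assets-us-east-1', 'assets-second-region']:  # placeholders
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={'Status': 'Enabled'},
    )

s3.put_bucket_replication(
    Bucket='assets-us-east-1',
    ReplicationConfiguration={
        'Role': 'arn:aws:iam::111122223333:role/s3-replication-role',
        'Rules': [{
            'ID': 'replicate-all',
            'Status': 'Enabled',
            'Priority': 1,
            'Filter': {},  # empty filter replicates the whole bucket
            'DeleteMarkerReplication': {'Status': 'Disabled'},
            'Destination': {'Bucket': 'arn:aws:s3:::assets-second-region'},
        }],
    },
)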
Vaibs099
6 months, 4 weeks ago
C is correct because you can serve dynamic websites with static content through a CDN by defining origins for both and, in your web-server app, referring to the CloudFront DNS name so the S3 origin delivers the static content. For a web server on EC2, custom origins can be used. So in the above scenario, to get resiliency: add another S3 origin with the bucket in a different Region, create an origin group with both S3 origins, set priorities on the origins, and select 4XX and 5XX error codes for failover. You can use the CloudFront distribution's DNS name in your web app, and that will do automatic failover with the least overhead. D also solves the problem, but you would need to build the failover mechanism into your app, whereas with the above, the CloudFront origin group takes care of that for you.
upvoted 1 times
...
ninomfr64
8 months, 1 week ago
Selected Answer: C
All options do the job, but: A would require code maintenance and managing a public hosted zone -> no. B would require Lambda and CloudFront operations -> no. C would require only CloudFront operations -> yes. D requires a lot of work for failover, which appears to be manual -> no.
upvoted 2 times
...
subbupro
8 months, 3 weeks ago
C is mostly correct. A is not correct, and B and D require code changes. C will take care of it via CloudFront origin failover.
upvoted 1 times
...
abeb
9 months ago
C is good
upvoted 1 times
...
severlight
9 months, 2 weeks ago
Selected Answer: C
obvious
upvoted 1 times
...
totten
11 months ago
Selected Answer: C
Here's why Option C is the most suitable choice: Replication: Amazon S3 Cross-Region replication is designed to replicate objects from one S3 bucket to another in a different Region. This ensures data resiliency across Regions with minimal operational overhead. Once configured, replication happens automatically. CloudFront: Setting up an Amazon CloudFront distribution with an origin group containing the two S3 buckets allows you to use a single CloudFront distribution to serve content from both Regions. CloudFront provides low-latency access to your content, and using an origin group allows for failover if one of the S3 buckets becomes unavailable.
upvoted 4 times
totten
11 months ago
Option A suggests configuring the application to write each object to both S3 buckets, which can result in higher operational overhead and may not provide immediate failover capabilities. Option B involves creating a Lambda function to copy objects, which adds complexity and requires code maintenance for each object written to the S3 bucket in us-east-1. Option D relies on manual updates to the application code for failover, which is less automated and could result in higher operational overhead. Therefore, Option C is the most efficient and operationally streamlined solution to achieve data resiliency and availability across multiple AWS Regions.
upvoted 1 times
...
...
Simon523
11 months, 3 weeks ago
Selected Answer: C
C, LEAST operational overhead
upvoted 1 times
...
TWOCATS
12 months ago
Selected Answer: C
C should incur the least operational cost, while D still requires the customer to update the code in whatever way they deem appropriate
upvoted 1 times
...
Karamen
1 year ago
Selected Answer: C
upvoted 1 times
...
xplusfb
1 year ago
Selected Answer: C
It's essentially asking about Cross-Region Replication (CRR). The right one is C.
upvoted 1 times
...
Brightalw
1 year ago
Selected Answer: D
EB supports .NET, and per the question the app is to be moved from on-premises to AWS. EB is more appropriate for this case.
upvoted 1 times
...
Jonalb
1 year, 1 month ago
Selected Answer: C
CCCCCCCCCCCCCC
upvoted 1 times
...
Jonalb
1 year, 1 month ago
Selected Answer: C
It's C, the correct answer.
upvoted 1 times
...
NikkyDicky
1 year, 1 month ago
C no doubt
upvoted 1 times
...
hglopes
1 year, 2 months ago
Selected Answer: A
With A you achieve better overall resiliency, because if a Region goes down you can still write to the other bucket and keep all web app features. It also does not require adding CloudFront if they don't use it already, leading to less operational overhead. It may, however, decrease S3 write performance and perhaps cause data consistency issues in the future.
upvoted 1 times
...
Jonalb
1 year, 2 months ago
Selected Answer: C
Option C is the most suitable solution with the least operational overhead compared to option D because it leverages the built-in replication functionality of Amazon S3. In option C, by configuring replication on the S3 bucket in us-east-1 to replicate objects to the S3 bucket in the second Region, the replication process is handled automatically by Amazon S3. This ensures that the static assets are consistently synchronized between the two regions without the need for manual intervention or custom code. On the other hand, option D suggests configuring replication on the S3 bucket in us-east-1 and updating the application code to load objects from the second Region in case of failover. While this option can achieve resiliency across multiple regions, it introduces additional complexity and operational overhead.
upvoted 2 times
...
AmalArul
1 year, 2 months ago
Selected Answer: C
C is the correct answer. More information at https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/high_availability_origin_failover.html
upvoted 1 times
...
gameoflove
1 year, 3 months ago
Selected Answer: C
C is the only option per the requirement
upvoted 1 times
...
rbm2023
1 year, 3 months ago
Selected Answer: C
C is the most suitable, because it will use both buckets as origins in the CloudFront distribution
upvoted 1 times
...
Sin_ha
1 year, 4 months ago
The solution that will meet the requirements with the least operational overhead is to configure replication on the S3 bucket in us-east-1 to replicate objects to the S3 bucket in the second Region and set up an Amazon CloudFront distribution with an origin group that contains the two S3 buckets as origins. Therefore, the correct answer is C.
upvoted 3 times
...
mfsec
1 year, 5 months ago
Selected Answer: C
S3 + Cloudfront
upvoted 2 times
...
Cloud_noob
1 year, 5 months ago
Selected Answer: C
you can configure Amazon CloudFront to use two different Amazon S3 buckets from different regions as the origin for your content. To do this, you would need to create two separate Amazon S3 bucket origins in your CloudFront distribution settings, each one pointing to a different S3 bucket in a different region. When creating the CloudFront distribution, you can add multiple origins to the distribution configuration. You can specify the origin domain name for each origin, which will correspond to the domain name of the S3 bucket you want to use as the origin. You can also specify the origin protocol policy, which determines whether CloudFront uses HTTP or HTTPS to communicate with the origin. Keep in mind that you will need to configure cross-region replication between the two S3 buckets in order to keep the content in both buckets synchronized. Additionally, you will need to make sure that both S3 buckets are publicly accessible or that CloudFront has the appropriate permissions to access the buckets.
upvoted 2 times
...
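For reference, origin group failover is configured inside the distribution itself. Below is a sketch of the relevant fragment of a CloudFront DistributionConfig (the structure used with boto3's create_distribution/update_distribution); the origin IDs and status-code list are illustrative.

```python
# Fragment of a CloudFront DistributionConfig as passed to
# cloudfront.create_distribution / update_distribution in boto3.
origin_group_fragment = {
    "OriginGroups": {
        "Quantity": 1,
        "Items": [
            {
                "Id": "s3-failover-group",
                "FailoverCriteria": {
                    # CloudFront retries the secondary origin when the
                    # primary returns one of these status codes.
                    "StatusCodes": {"Quantity": 4, "Items": [500, 502, 503, 504]}
                },
                "Members": {
                    "Quantity": 2,
                    "Items": [
                        {"OriginId": "s3-us-east-1"},   # primary
                        {"OriginId": "s3-us-west-2"},   # secondary
                    ],
                },
            }
        ],
    }
}
# The cache behavior then points at the group instead of a single origin:
# "DefaultCacheBehavior": {"TargetOriginId": "s3-failover-group", ...}
```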
jooncco
1 year, 6 months ago
Selected Answer: C
Modifying any existing application code IS an operational overhead.
upvoted 3 times
...
ask4cloud
1 year, 7 months ago
Selected Answer: C
This solution will meet the requirements with the least operational overhead as it allows the company to use Amazon CloudFront to automatically distribute the static assets of the web application across multiple regions, and if the primary S3 bucket in us-east-1 becomes unavailable, CloudFront will automatically route the traffic to the secondary S3 bucket in the second region. This solution eliminates the need for additional Lambda function or updating the application code for failover.
upvoted 4 times
...
masetromain
1 year, 7 months ago
Selected Answer: C
C. Configure replication on the S3 bucket in us-east-1 to replicate objects to the S3 bucket in the second Region. Set up an Amazon CloudFront distribution with an origin group that contains the two S3 buckets as origins. This option provides automatic replication of objects across the two S3 buckets, and CloudFront automatically routes requests to the nearest origin, providing low latency and high availability for the application. This solution requires minimal operational overhead to maintain as the replication and failover is handled automatically by S3 and CloudFront.
upvoted 3 times
...
VVish
1 year, 7 months ago
C - LEAST operational overhead
upvoted 2 times
...
aimik
1 year, 8 months ago
Selected Answer: C
D involves updating the application code to load S3 objects from the second Region in case of a failover, which is not necessary if you are using CloudFront with an origin group as in option C.
upvoted 3 times
...
masetromain
1 year, 8 months ago
Selected Answer: C
Answer C
upvoted 4 times
...
Question #37 Topic 1

A company is hosting a three-tier web application in an on-premises environment. Due to a recent surge in traffic that resulted in downtime and a significant financial impact, company management has ordered that the application be moved to AWS. The application is written in .NET and has a dependency on a MySQL database. A solutions architect must design a scalable and highly available solution to meet the demand of 200,000 daily users.
Which steps should the solutions architect take to design an appropriate solution?

  • A. Use AWS Elastic Beanstalk to create a new application with a web server environment and an Amazon RDS MySQL Multi-AZ DB instance. The environment should launch a Network Load Balancer (NLB) in front of an Amazon EC2 Auto Scaling group in multiple Availability Zones. Use an Amazon Route 53 alias record to route traffic from the company’s domain to the NLB.
  • B. Use AWS CloudFormation to launch a stack containing an Application Load Balancer (ALB) in front of an Amazon EC2 Auto Scaling group spanning three Availability Zones. The stack should launch a Multi-AZ deployment of an Amazon Aurora MySQL DB cluster with a Retain deletion policy. Use an Amazon Route 53 alias record to route traffic from the company’s domain to the ALB.
  • C. Use AWS Elastic Beanstalk to create an automatically scaling web server environment that spans two separate Regions with an Application Load Balancer (ALB) in each Region. Create a Multi-AZ deployment of an Amazon Aurora MySQL DB cluster with a cross-Region read replica. Use Amazon Route 53 with a geoproximity routing policy to route traffic between the two Regions.
  • D. Use AWS CloudFormation to launch a stack containing an Application Load Balancer (ALB) in front of an Amazon ECS cluster of Spot instances spanning three Availability Zones. The stack should launch an Amazon RDS MySQL DB instance with a Snapshot deletion policy. Use an Amazon Route 53 alias record to route traffic from the company’s domain to the ALB.
Reveal Solution Hide Solution

Correct Answer: C 🗳️

Community vote distribution
B (91%)
6%

robertohyena
Highly Voted 1 year, 8 months ago
Selected Answer: B
Agree with B.
Not A: we would not use an NLB for a web app.
Not C: Beanstalk is a regional service. It CANNOT be an "automatically scaling web server environment that spans two separate Regions".
Not D: Spot Instances can't meet 'highly available'.
upvoted 26 times
kz407
5 months, 1 week ago
I don't think ASGs are cross-Region either. This Stack Overflow answer gives a useful perspective on the subject: https://stackoverflow.com/a/12907101/3126973
upvoted 1 times
...
masetromain
1 year, 7 months ago
That's correct, option C is not a valid solution because AWS Elastic Beanstalk is a region-specific service, it cannot span multiple regions. Option B is a valid solution that uses CloudFormation to launch a stack with an Application Load Balancer in front of an Auto Scaling group, a Multi-AZ Aurora MySQL cluster and Route 53 to route traffic to the load balancer, it meets the requirements of scalability and high availability with a good performance and with less operational overhead.
upvoted 6 times
Perkuns
1 year, 2 months ago
If I am not mistaken, you can deploy the same EB app to a different Region, so why does that eliminate C? It further increases your availability with geolocation-weighted routing, as well as giving you DR, which further increases availability along with low RPO and RTO.
upvoted 5 times
jpa8300
7 months, 4 weeks ago
I agree with you, that's the best option, two EBs, one in each region to deploy, manage and monitor all the environment.
upvoted 1 times
...
...
...
...
masetromain
Highly Voted 1 year, 7 months ago
Selected Answer: B
B is correct. The solution architect should use AWS CloudFormation to launch a stack containing an Application Load Balancer (ALB) in front of an Amazon EC2 Auto Scaling group spanning three Availability Zones. The stack should launch a Multi-AZ deployment of an Amazon Aurora MySQL DB cluster with a Retain deletion policy. Use an Amazon Route 53 alias record to route traffic from the company's domain to the ALB. This solution provides scalability and high availability for the web application by using an Application Load Balancer and an Auto Scaling group in multiple availability zones, which can automatically scale in and out based on traffic demand. The use of a Multi-AZ Amazon Aurora MySQL DB cluster provides high availability for the database layer and the Retain deletion policy ensures the data is retained even if the DB instance is deleted. Additionally, the use of Route 53 with an alias record ensures traffic is routed to the correct location.
upvoted 8 times
...
gfhbox0083
Most Recent 1 month, 1 week ago
Selected Answer: B
B, for sure. Elastic Beanstalk is region specific. The "Retain" deletion policy in AWS Aurora ensures that when you delete a database cluster, the automated backups and snapshots of the cluster are retained. This means that even though the database cluster itself is deleted, the backups and snapshots remain, allowing you to restore the cluster from those backups at a later time.
upvoted 2 times
...
gfhbox0083
1 month, 2 weeks ago
B, for sure. Elastic Beanstalk environments are typically created within a single AWS region.
upvoted 1 times
...
TonytheTiger
4 months, 3 weeks ago
Selected Answer: C
Option C: The only AWS documentation I found that supports .NET application migration is for Elastic Beanstalk; it says "EB is the fastest and simplest way to deploy .NET applications on AWS". Many suggest option B, but the question is not asking about cost or least operational overhead, just a scalable and highly available migration for a .NET application. That said, I can see why so many people are selecting option B. https://docs.aws.amazon.com/whitepapers/latest/develop-deploy-dotnet-apps-on-aws/aws-elastic-beanstalk.html https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts.concepts.design.html
upvoted 2 times
...
kz407
5 months, 1 week ago
Selected Answer: B
B, however, is not a fully highly available solution IMO because it is restricted to one Region; if the Region goes down, the web app goes down as well. A is out of the picture because it involves an NLB. D is out of the picture because it involves Spot Instances, which are not the choice for HA requirements. C: everything is good except the mention of an "Elastic Beanstalk environment that spans across Regions". That is wrong. EB environments are a regional construct; you can't have them spanning Regions, though you can have EB in multiple Regions.
upvoted 1 times
...
bjexamprep
7 months, 1 week ago
Selected Answer: B
Guessing the question designer prefers B, but it has issues. The talk of an R53 alias record is questionable, because an alias record points to an IP address while an ALB endpoint is not an IP address. A has a flaw too. The question says 3-tier web application, and AWS question designers often mess up the definition of 3-tier; there isn't a single clear definition (browser/application server/database is one, web server/application server/database is another, and A seems to mean the latter). Then, if Elastic Beanstalk is hosting a web server, what is the ASG hosting? And why is R53 pointing to the NLB, which points to the ASG? C is wrong because Elastic Beanstalk cannot span Regions. D is wrong because Spot Instances are not HA. Weighing the flaws of the different answers, B has the fewest.
upvoted 1 times
...
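For what it's worth, Route 53 alias records can target an ALB directly by its DNS name and canonical hosted zone ID. A minimal boto3 sketch, with a hypothetical hosted zone ID, load balancer name, and record name:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")
route53 = boto3.client("route53")

# Look up the ALB's DNS name and its canonical hosted zone ID,
# both of which the alias target needs.
alb = elbv2.describe_load_balancers(Names=["web-alb"])["LoadBalancers"][0]

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",  # the company's hosted zone (hypothetical)
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",  # alias records use type A/AAAA, not CNAME
                    "AliasTarget": {
                        "HostedZoneId": alb["CanonicalHostedZoneId"],
                        "DNSName": alb["DNSName"],
                        "EvaluateTargetHealth": True,
                    },
                },
            }
        ]
    },
)
```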
ninomfr64
8 months, 1 week ago
Selected Answer: B
Not C, as we do not need to span multiple Regions (no DR or global-reach requirement); also, a cross-Region read replica does not fail over automatically (you need to promote it to primary). Finally, the wording implies a single environment that spans two separate Regions, which is not supported (you would need two separate environments).
Not D, as we have a single RDS DB instance, so no HA.
Both A and B do the job, but B provides better scalability because it uses Aurora Multi-AZ, which allows the secondary (reader) instance(s) to be accessed for reads, while an RDS Multi-AZ instance does not allow the standby instance endpoint to be accessed. (That could be circumvented with a Multi-AZ DB cluster deployment, which provides two readable standby instances.)
upvoted 1 times
...
ayadmawla
8 months, 2 weeks ago
Selected Answer: C
The answer is C. The best way to migrate a .NET application to AWS is via Beanstalk (see: https://docs.aws.amazon.com/whitepapers/latest/develop-deploy-dotnet-apps-on-aws/aws-elastic-beanstalk.html). I think the wording about spanning a deployment across two Regions has triggered some to reject C based on the multi-Region aspect, but if you read on you will notice the separate regional deployments based on two ALBs etc. Just my two pennies :)
upvoted 1 times
...
subbupro
8 months, 3 weeks ago
B is correct.
upvoted 1 times
...
shaaam80
8 months, 3 weeks ago
Selected Answer: B
Answer B
upvoted 1 times
...
abeb
9 months ago
B is good
upvoted 1 times
...
totten
11 months ago
Selected Answer: B
Here's why Option B is the best choice:
High Availability: The use of an Application Load Balancer (ALB) and an Amazon Aurora Multi-AZ deployment ensures high availability and fault tolerance for the web application and the MySQL database. The Multi-AZ setup for Aurora provides automatic failover.
Scalability: Using an EC2 Auto Scaling group across multiple Availability Zones allows the application to automatically scale to meet traffic demands. This is crucial for handling the surge in traffic from 200,000 daily users.
Deletion Policy: The Retain deletion policy for the Aurora MySQL DB cluster ensures that even if the CloudFormation stack is deleted, the database is retained, which is important for data preservation and recovery.
Route 53 Routing: Route 53 with an alias record provides efficient DNS routing, directing traffic to the ALB, which then distributes it to the EC2 instances. This ensures that users can access the application reliably.
upvoted 1 times
totten
11 months ago
Option C introduces unnecessary complexity by spanning two separate Regions and using geoproximity routing. This is typically used for disaster recovery and global deployments, which may not be necessary here.
upvoted 1 times
...
...
Simon523
11 months, 3 weeks ago
Selected Answer: B
The question asks to “design a scalable and highly available solution”. The difference between Beanstalk and CloudFormation is that Beanstalk is PaaS (platform as a service) while CloudFormation is IaC (infrastructure as code). So I go for answer B, as it is about the infrastructure.
upvoted 1 times
...
victorHugo
12 months ago
Selected Answer: C
"web server environment" doesn't require a single instance to spawns multiple regions, multiple AWS Beanstalks for each region are also feasible. With geoproximity routing it is guaranteed the requests are routed to the same region. In addition the requirement is "highly available", which can be achieve with a multi region architecture
upvoted 1 times
...
aviathor
12 months ago
Selected Answer: B
A. I do not quite understand the choice of NLB for this, but Multi-AZ DB instance, EC2 auto-scaling in multiple AZ sure sounds good. C. Elastic Beanstalk does not "span multiple regions". Geoproximity routing does not sound right for a disaster recovery scenario. B. I like CloudFormation, and I like the Retain deletion policy. In order to switch to the other region, one will need to update the Route 53 alias... D. I do not like the Snapshot deletion policy... The DB is not Multi-AZ, nor has a read-replica in the fail-over region. Spot instance is not great for HA.
upvoted 1 times
...
chico2023
1 year ago
Selected Answer: B
C is incorrect. If it wasn't "to create an automatically scaling web server environment that spans two separate Regions" I would also go with that.
upvoted 1 times
...
Jonalb
1 year, 1 month ago
Selected Answer: B
bbbbbbbbbbbbb
upvoted 1 times
...
NikkyDicky
1 year, 1 month ago
Selected Answer: B
Its a B
upvoted 1 times
...
Limlimwdwd
1 year, 3 months ago
Selected Answer: B
The question didn't mention a need for DR, hence HA within a Region will suffice. An NLB is also not required. That leaves B & D, and B is the better choice since Spot Instances are not desired.
upvoted 1 times
...
rbm2023
1 year, 3 months ago
Selected Answer: B
I removed the Beanstalk due to the use case. Between the cloud formation options one of them mentions the retention policy, which removes option D. You want to keep the DB in case the stack is destroyed.
upvoted 1 times
...
devopsy
1 year, 4 months ago
B, because high scalability and availability can be achieved using Multi-AZ. Multi-Region is not required unless the question mentions a global audience.
upvoted 1 times
...
Sin_ha
1 year, 4 months ago
Option B's use of Aurora MySQL may be a better option due to its scalability and high availability, which will help in minimizing downtime. So, the correct answer is Option B.
upvoted 1 times
...
frfavoreto
1 year, 4 months ago
Selected Answer: B
Both 'A' and 'B' are technically functional; however, 'B' is more convenient because it uses Aurora instead of RDS. Aurora has much more scalability as a serverless-capable DB service, in contrast to RDS, which is more rigid in this aspect.
upvoted 1 times
...
soujora2
1 year, 4 months ago
I have a question. The question says "company management has ordered that the application be moved to AWS". Looking at the answers, none of them seems to cover actually moving the application, so why is the answer "B"?
upvoted 1 times
OnePunchExam
1 year, 4 months ago
The question is "Which steps should the solutions architect take to design an appropriate solution?". It is not asking for the full and complete steps, so as long the answer is part of a bigger picture, it can suffice. But actually Ans B does address the cloud 3 tier environment setup: - web tier is ALB - app tier is EC2 in ASG for hosting workloads - db tier is the aurora
upvoted 1 times
...
...
mfsec
1 year, 5 months ago
Selected Answer: B
B is the answer
upvoted 2 times
...
dev112233xx
1 year, 5 months ago
Selected Answer: B
B makes sense to me ✅
upvoted 2 times
...
zejou1
1 year, 5 months ago
Selected Answer: B
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts.concepts.design.html AWS EB does support .NET and MySQL; the difference is that it does not support spanning separate Regions.
upvoted 1 times
...
spd
1 year, 6 months ago
Selected Answer: B
ALB and Route 53 alias
upvoted 1 times
...
zozza2023
1 year, 6 months ago
Selected Answer: B
Answer is B
upvoted 2 times
...
lunt
1 year, 7 months ago
Selected Answer: A
Answer is A.
B: R53 alias record?
C: No requirement for multi-Region, just HA.
D: Spot Instances are not HA.
A: Yes. NLB fine, EC2 ASG fine, R53 alias to the NLB EIP fine. The question does not mention Regions, and an NLB can work with websites - yes, an ALB is the better option, but an NLB works perfectly fine for HTTP/HTTPS traffic.
upvoted 2 times
bcx
1 year, 2 months ago
An alias record is exactly what you need for that case to point to the ALB. So that makes B the correct answer.
upvoted 1 times
...
...
masetromain
1 year, 8 months ago
Selected Answer: B
for me the answer is B
upvoted 1 times
masetromain
1 year, 8 months ago
https://www.examtopics.com/discussions/amazon/view/28502-exam-aws-certified-solutions-architect-professional-topic-1/
upvoted 1 times
...
...
zhangyu20000
1 year, 8 months ago
Answer is B. A and C are not correct because Beanstalk does not support .NET. D uses Spot Instances, which are not reliable.
upvoted 1 times
EricZhang
1 year, 8 months ago
Beanstalk does support .NET https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_NET.container.console.html
upvoted 5 times
...
bcx
1 year, 2 months ago
C is wrong because an environment in Elastic Beanstalk cannot span more than one region.
upvoted 1 times
...
...
Question #38 Topic 1

A company is using AWS Organizations to manage multiple AWS accounts. For security purposes, the company requires the creation of an Amazon Simple Notification Service (Amazon SNS) topic that enables integration with a third-party alerting system in all the Organizations member accounts.
A solutions architect used an AWS CloudFormation template to create the SNS topic and stack sets to automate the deployment of CloudFormation stacks. Trusted access has been enabled in Organizations.
What should the solutions architect do to deploy the CloudFormation StackSets in all AWS accounts?

  • A. Create a stack set in the Organizations member accounts. Use service-managed permissions. Set deployment options to deploy to an organization. Use CloudFormation StackSets drift detection.
  • B. Create stacks in the Organizations member accounts. Use self-service permissions. Set deployment options to deploy to an organization. Enable the CloudFormation StackSets automatic deployment.
  • C. Create a stack set in the Organizations management account. Use service-managed permissions. Set deployment options to deploy to the organization. Enable CloudFormation StackSets automatic deployment.
  • D. Create stacks in the Organizations management account. Use service-managed permissions. Set deployment options to deploy to the organization. Enable CloudFormation StackSets drift detection.
Reveal Solution Hide Solution

Correct Answer: C 🗳️

Community vote distribution
C (100%)

masetromain
Highly Voted 1 year, 7 months ago
Selected Answer: C
The best solution is C, because it involves creating the stack set in the management account of the organization, which is the central point of control for all the member accounts. This allows the solutions architect to manage the deployment of the stack set across all member accounts from a single location. Service-managed permissions are used, which allows the CloudFormation service to deploy the stack set to all member accounts. The deployment options are set to deploy to the organization and automatic deployment is enabled, which ensures that the stack set is automatically deployed to all member accounts as soon as it is created in the management account.
upvoted 19 times
...
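To make option C concrete, here is a minimal boto3 sketch of creating a service-managed stack set with automatic deployment and deploying it to the organization. The stack set name, template file, and root OU ID are hypothetical.

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Create the stack set in the Organizations management account with
# service-managed permissions and automatic deployment, so new member
# accounts get the SNS topic as they join the organization.
cfn.create_stack_set(
    StackSetName="org-alerting-sns-topic",
    TemplateBody=open("sns_topic.yaml").read(),
    PermissionModel="SERVICE_MANAGED",
    AutoDeployment={
        "Enabled": True,
        "RetainStacksOnAccountRemoval": False,
    },
)

# Deploy stack instances to the whole organization; targeting the root
# OU ID covers every member account in the chosen Region(s).
cfn.create_stack_instances(
    StackSetName="org-alerting-sns-topic",
    DeploymentTargets={"OrganizationalUnitIds": ["r-abcd"]},  # hypothetical root ID
    Regions=["us-east-1"],
)
```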
masetromain
Highly Voted 1 year, 8 months ago
Selected Answer: C
https://www.examtopics.com/discussions/amazon/view/47723-exam-aws-certified-solutions-architect-professional-topic-1/
upvoted 5 times
...
Vaibs099
Most Recent 6 months, 4 weeks ago
C. Create a stack set in the Organizations management account. Use service-managed permissions. Set deployment options to deploy to the organization. Enable CloudFormation StackSets automatic deployment.
C is the most suitable: enabling CloudFormation StackSets automatic deployment takes care of any new account in the Org, setting deployment options to deploy to the organization deploys stack instances to the targeted accounts in the Org, and using service-managed permissions is hassle free as it takes care of the roles for you.
D. Create stacks in the Organizations management account. Use service-managed permissions. Set deployment options to deploy to the organization. Enable CloudFormation StackSets drift detection.
D sounds reasonable too, as StackSets drift detection is a good option to have, but it is not a requirement; it only saves future troubleshooting of drift scenarios.
upvoted 1 times
...
nharaz
7 months ago
Selected Answer: C
D is wrong - Drift Detection identifies unmanaged changes (Outside CloudFormation)
upvoted 2 times
...
jainparag1
9 months ago
Selected Answer: C
I'll go with C since it satisfies all the requirements with minimum operational overhead. But wondering if "Stack Sets drift detection" is just a distractor here. Can someone throw some light on this?
upvoted 2 times
ninomfr64
8 months, 1 week ago
I am not an expert, just sharing my thoughts: "Stack Sets drift detection" is a feature of stack set, however this is not needed according to the scenario. https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-drift.html. D is a no-go for me because it deploys in each managed account without making use of stack sets, so you cannot then use stack sets drift detection.
upvoted 1 times
...
...
daz2023
10 months, 4 weeks ago
Selected Answer: C
C is the right answer
upvoted 1 times
...
NikkyDicky
1 year, 1 month ago
Selected Answer: C
C no brainer
upvoted 1 times
...
mfsec
1 year, 5 months ago
Selected Answer: C
Create a stack set in the Organizations management account.
upvoted 2 times
...
spd
1 year, 6 months ago
Selected Answer: C
Stack Set in Mgmt account
upvoted 2 times
...
Atila50
1 year, 8 months ago
I think it should be A.
upvoted 1 times
...
Question #39 Topic 1

A company wants to migrate its workloads from on premises to AWS. The workloads run on Linux and Windows. The company has a large on-premises infrastructure that consists of physical machines and VMs that host numerous applications.

The company must capture details about the system configuration, system performance, running processes, and network connections of its on-premises workloads. The company also must divide the on-premises applications into groups for AWS migrations. The company needs recommendations for Amazon EC2 instance types so that the company can run its workloads on AWS in the most cost-effective manner.

Which combination of steps should a solutions architect take to meet these requirements? (Choose three.)

  • A. Assess the existing applications by installing AWS Application Discovery Agent on the physical machines and VMs.
  • B. Assess the existing applications by installing AWS Systems Manager Agent on the physical machines and VMs.
  • C. Group servers into applications for migration by using AWS Systems Manager Application Manager.
  • D. Group servers into applications for migration by using AWS Migration Hub.
  • E. Generate recommended instance types and associated costs by using AWS Migration Hub.
  • F. Import data about server sizes into AWS Trusted Advisor. Follow the recommendations for cost optimization.
Reveal Solution Hide Solution

Correct Answer: BDE 🗳️

Community vote distribution
ADE (95%)
5%

bititan
Highly Voted 1 year, 7 months ago
Selected Answer: ADE
Trusted Advisor doesn't have an option to upload data, so option F is irrelevant.
upvoted 23 times
...
ninomfr64
Highly Voted 8 months, 1 week ago
Selected Answer: ADE
A vs B -> A, because we need to use AWS Application Discovery Service and it provides its own agent: https://docs.aws.amazon.com/application-discovery/latest/userguide/discovery-agent.html
C vs D -> D, because AWS Application Discovery Service is integrated with AWS Migration Hub and it can be used to group servers into applications: https://aws.amazon.com/migration-hub/faqs/#:~:text=How%20do%20I%20group%20servers%20into%20an%20application%3F
E vs F -> E, as AWS Migration Hub can generate recommendations for instance types: https://docs.aws.amazon.com/migrationhub/latest/ug/ec2-recommendations.html
upvoted 6 times
...
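For illustration, a minimal boto3 sketch of the A/D flow described above: turning on agent data collection and grouping discovered servers into an application. The application name and server configuration IDs are hypothetical.

```python
import boto3

discovery = boto3.client("discovery", region_name="us-east-1")

# Once Application Discovery Agents are installed and registered,
# turn on data collection for the reporting agents.
agents = discovery.describe_agents()["agentsInfo"]
discovery.start_data_collection_by_agent_ids(
    agentIds=[a["agentId"] for a in agents]
)

# Group discovered servers into an application; the grouping then shows
# up in Migration Hub, which can generate EC2 instance recommendations.
app = discovery.create_application(name="order-processing")
discovery.associate_configuration_items_to_application(
    applicationConfigurationId=app["configurationId"],
    configurationIds=["d-server-000example1", "d-server-000example2"],
)
```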
MAZIADI
Most Recent 2 weeks ago
Selected Answer: ADE
Why not B? ("Assess the existing applications by installing AWS Systems Manager Agent on the physical machines and VMs.") Because AWS Systems Manager Agent is used for managing and automating tasks on instances, not for capturing detailed application and performance data during an assessment phase. AWS Application Discovery Agent is the appropriate tool for that purpose.
upvoted 1 times
...
gofavad926
5 months, 1 week ago
Selected Answer: ADE
ADE is correct
upvoted 1 times
...
8608f25
6 months, 2 weeks ago
Selected Answer: ADE
The correct answers are:
* A. Assess the existing applications by installing AWS Application Discovery Agent on the physical machines and VMs. The AWS Application Discovery Service helps gather detailed information about on-premises data centers, including servers, network dependencies, and performance metrics.
* D. Group servers into applications for migration by using AWS Migration Hub. AWS Migration Hub provides a centralized location to track the progress of application migrations across multiple AWS and partner solutions. It allows grouping discovered servers into applications, which simplifies the organization of migration tasks.
* E. Generate recommended instance types and associated costs by using AWS Migration Hub. After servers are discovered and grouped into applications, AWS Migration Hub can analyze the collected data to recommend suitable Amazon EC2 instance types. This ensures that the migrated applications are hosted on the most cost-effective resources.
upvoted 3 times
...
Simon523
11 months, 3 weeks ago
Selected Answer: ADE
https://aws.amazon.com/tw/blogs/mt/using-aws-migration-hub-network-visualization-to-overcome-application-and-server-dependency-challenges/
upvoted 2 times
...
NikkyDicky
1 year, 1 month ago
Selected Answer: ADE
ADE no brainer
upvoted 1 times
...
ZK000001qws
1 year, 2 months ago
B is incorrect, as Systems Manager doesn't do discovery; the SSM Agent makes it possible for Systems Manager to update, manage, and configure resources on AWS as well as on-premises. ADE
upvoted 3 times
...
asifjanjua88
1 year, 4 months ago
ADE is correct answer.
upvoted 1 times
...
Jacky_exam
1 year, 4 months ago
Selected Answer: ADE
https://docs.aws.amazon.com/application-discovery/latest/userguide/discovery-agent.html https://docs.aws.amazon.com/migrationhub/latest/ug/ec2-recommendations.html
upvoted 2 times
...
hgc2023
1 year, 5 months ago
B is incorrect because the servers are on prem.
upvoted 1 times
ninomfr64
8 months, 1 week ago
SSM can be installed on on-premise server. This is not the point for not picking B
upvoted 1 times
...
...
dev112233xx
1 year, 5 months ago
Selected Answer: ADE
ADE no doubts ✅
upvoted 1 times
...
God_Is_Love
1 year, 6 months ago
Logical answer: this falls under the domain "Accelerate Workload Migration and Modernization", promoting Migration Hub.
Step 1 - Identify the apps.
Step 2 - Group them.
Step 3 - Before migrating, find out what instance types would be needed when the actual migration happens. https://d1.awsstatic.com/Product-Page-Diagram_AWS-Migration-Hub-Orchestrator%402x.0c34c9483d13ebd26cf9072193384a58531624f3.png
For on-premises migrations, the first phase is discovery, which can be done with the Discovery Agent: A. https://d1.awsstatic.com/products/application-discovery-service/Product-Page-Diagram_AWS-Application-Discovery-Service%201.9d81c27f3de50349a9406b8def61b8eb914e2930.png
I won't go with Trusted Advisor, even though it advises on cost, because it applies to an environment already on AWS. Here the workloads are about to be migrated into AWS, and the architects need to discover a lot of information beforehand to plan. So I choose E between E and F. My answer: A, D, E.
upvoted 2 times
...
aws0909
1 year, 6 months ago
Why Option C Group servers into applications for migration by using AWS Systems Manager Application Manager is incorrect?
upvoted 1 times
sambb
1 year, 6 months ago
AWS SSM Application Manager is used for existing resources deployed to AWS
upvoted 1 times
...
...
moota
1 year, 6 months ago
Selected Answer: ADE
A is better than B. > Agent-based discovery can be performed by deploying the AWS Application Discovery Agent on each of your VMs and physical servers. The agent installer is available for Windows and Linux operating systems. It collects static configuration data, detailed time-series system-performance information, inbound and outbound network connections, and processes that are running. https://docs.aws.amazon.com/application-discovery/latest/userguide/what-is-appdiscovery.html
upvoted 1 times
...
boomx
1 year, 7 months ago
BDE. Trusted Advisor is not for onprem assessments. Migration hub does EC2 ones
upvoted 1 times
...
zhangyu20000
1 year, 7 months ago
ADE is my answer
upvoted 3 times
...
masetromain
1 year, 7 months ago
Selected Answer: ADF
in order to meet the requirements of capturing details about the system configuration, system performance, running processes, and network connections of on-premises workloads, the company should install the AWS Application Discovery Agent on the physical machines and VMs. This will allow the company to assess the existing applications and gather information about their system configurations, performance, and network connections. To group servers into applications for migration, the company should use the AWS Migration Hub. This will allow the company to organize their servers and applications in a way that makes migration to AWS more manageable and efficient.
upvoted 2 times
masetromain
1 year, 7 months ago
In order to generate recommended instance types and associated costs, the company should use AWS Trusted Advisor. Trusted Advisor can analyze the data collected by the Application Discovery Agent and provide recommendations for cost-optimized EC2 instances that will be suitable for the company's workloads. This will allow the company to run their workloads on AWS in the most cost-effective manner. Option E, which involves generating recommended instance types and associated costs using AWS Migration Hub, is not the best choice for cost optimization, Trusted Advisor is a service that analyzes the resources in your AWS environment and provides recommendations to help you save money, improve system performance, or close security gaps.
upvoted 1 times
shputhan
1 year, 7 months ago
I think option E is correct. Considering the fact Trusted Advisor provides suggestion based on utilization of resources which is already deployed in AWS. Whereas Migration Hub can suggest recommended EC2 instances. https://docs.aws.amazon.com/migrationhub/latest/ug/ec2-recommendations.html
upvoted 7 times
...
ZwAi777
11 months ago
E should have mentioned Migration Evaluator since ME provides cost evaluation and right sizing info. My thoughts
upvoted 1 times
...
...
God_Is_Love
1 year, 6 months ago
Hey Maestro, appreciate your responses, man.. but you are wrong on this question. E is correct because this is an on-premises requirement; F would apply in an AWS environment. ADE should be correct. I gave a detailed answer in the other comments if you are interested.
upvoted 3 times
...
...
Question #40 Topic 1

A company is hosting an image-processing service on AWS in a VPC. The VPC extends across two Availability Zones. Each Availability Zone contains one public subnet and one private subnet.

The service runs on Amazon EC2 instances in the private subnets. An Application Load Balancer in the public subnets is in front of the service. The service needs to communicate with the internet and does so through two NAT gateways. The service uses Amazon S3 for image storage. The EC2 instances retrieve approximately 1 TB of data from an S3 bucket each day.

The company has promoted the service as highly secure. A solutions architect must reduce cloud expenditures as much as possible without compromising the service’s security posture or increasing the time spent on ongoing operations.

Which solution will meet these requirements?

  • A. Replace the NAT gateways with NAT instances. In the VPC route table, create a route from the private subnets to the NAT instances.
  • B. Move the EC2 instances to the public subnets. Remove the NAT gateways.
  • C. Set up an S3 gateway VPC endpoint in the VPC. Attach an endpoint policy to the endpoint to allow the required actions on the S3 bucket.
  • D. Attach an Amazon Elastic File System (Amazon EFS) volume to the EC2 instances. Host the images on the EFS volume.
Reveal Solution Hide Solution

Correct Answer: C 🗳️

Community vote distribution
C (100%)

masetromain
Highly Voted 1 year, 7 months ago
Selected Answer: C
C. Setting up an S3 gateway VPC endpoint in the VPC and attaching an endpoint policy to the endpoint will allow the EC2 instances to securely access the S3 bucket for image storage without the need for NAT gateways, reducing costs without compromising security or increasing ongoing operations. This option reduces the costs associated with the NAT gateways and allows for faster data retrieval from the S3 bucket as traffic does not have to go through the internet gateway.
upvoted 15 times
...
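A minimal boto3 sketch of option C, assuming hypothetical VPC, route table, and bucket names; the endpoint policy limits the endpoint to the required S3 actions.

```python
import json
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Endpoint policy restricted to the image bucket (names are illustrative).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": [
                "arn:aws:s3:::image-storage-bucket",
                "arn:aws:s3:::image-storage-bucket/*",
            ],
        }
    ],
}

# A gateway endpoint adds S3 routes to the private subnets' route tables,
# so S3 traffic bypasses the NAT gateways (no per-GB NAT processing fee).
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0example",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0example-private-a", "rtb-0example-private-b"],
    PolicyDocument=json.dumps(policy),
)
```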
God_Is_Love
Highly Voted 1 year, 6 months ago
The only reason for C is that gateway endpoints are not billed and so are cost-effective (https://docs.aws.amazon.com/AmazonS3/latest/userguide/privatelink-interface-endpoints.html#types-of-vpc-endpoints-for-s3). If the question changed from a single Region to across Regions, the answer would be B (the overhead of NAT gateways and pushing TBs of data through NAT is expensive), because gateway endpoints are Region-specific.
upvoted 7 times
anita_student
1 year, 6 months ago
B wouldn’t be highly secure and data transfer would also be slower
upvoted 1 times
...
...
8608f25
Most Recent 6 months, 2 weeks ago
Selected Answer: C
Option C is the most cost-effective solution that maintains the service’s security posture. An S3 gateway VPC endpoint allows private connections between the VPC and S3 without requiring traffic to go through the internet or NAT gateways. This eliminates the need for NAT gateways when accessing S3, which can significantly reduce costs, especially considering the 1 TB of data retrieved daily from S3. Endpoint policies ensure that the security posture is not compromised by allowing only the required actions on the specific S3 bucket.
upvoted 1 times
...
grire974
7 months, 2 weeks ago
Any chance someone could fix the typo in the correct answer; "VPC. Attach..." instead of VPAttach; terribly misleading.
upvoted 1 times
...
daz2023
10 months, 4 weeks ago
Selected Answer: C
C for using an endpoint.
upvoted 2 times
...
NikkyDicky
1 year, 1 month ago
C of course
upvoted 1 times
...
gameoflove
1 year, 3 months ago
Selected Answer: C
C is the correct option, as the S3 gateway endpoint removes the NAT gateway data-processing cost
upvoted 2 times
...
mfsec
1 year, 5 months ago
Selected Answer: C
Set up an S3 gateway VPC endpoint
upvoted 3 times
...
dev112233xx
1 year, 5 months ago
Selected Answer: C
C - easy one ✅
upvoted 3 times
...
zozza2023
1 year, 6 months ago
Selected Answer: C
C for sure
upvoted 4 times
...
Question #41 Topic 1

A company recently deployed an application on AWS. The application uses Amazon DynamoDB. The company measured the application load and configured the RCUs and WCUs on the DynamoDB table to match the expected peak load. The peak load occurs once a week for a 4-hour period and is double the average load. The application load is close to the average load for the rest of the week. The access pattern includes many more writes to the table than reads of the table.

A solutions architect needs to implement a solution to minimize the cost of the table.

Which solution will meet these requirements?

  • A. Use AWS Application Auto Scaling to increase capacity during the peak period. Purchase reserved RCUs and WCUs to match the average load.
  • B. Configure on-demand capacity mode for the table.
  • C. Configure DynamoDB Accelerator (DAX) in front of the table. Reduce the provisioned read capacity to match the new peak load on the table.
  • D. Configure DynamoDB Accelerator (DAX) in front of the table. Configure on-demand capacity mode for the table.
Reveal Solution Hide Solution

Correct Answer: D 🗳️

Community vote distribution
A (70%)
B (18%)
12%

zhangyu20000
Highly Voted 1 year, 7 months ago
A is correct. On-demand mode is for unknown load patterns; auto scaling is for known burst patterns.
upvoted 25 times
AimarLeo
6 months, 3 weeks ago
But the pattern here is known.. 4-hour peak time etc.. not sure if that would be the right answer
upvoted 1 times
...
How does AWS Application Auto Scaling scale the read/write performance of DynamoDB?
upvoted 1 times
tannh
11 months, 3 weeks ago
You can scale DynamoDB tables and global secondary indexes using target tracking scaling policies and scheduled scaling. https://docs.aws.amazon.com/autoscaling/application/userguide/services-that-can-integrate-dynamodb.html
upvoted 1 times
...
...
...
ccort
Highly Voted 1 year, 7 months ago
Selected Answer: A
A. On-demand prices can be up to 7 times higher; given the options, it is better to have reserved WCUs and RCUs and auto scale on the given schedule.
upvoted 16 times
...
subbupro
Most Recent 2 days, 8 hours ago
I think B is correct, because reserved capacity is not required; on-demand would be better because the peak is only 4 hours per week. So B would be better. Auto scaling of the application cannot impact DynamoDB tables.
upvoted 1 times
...
vn_hunglv
1 month ago
Selected Answer: A
I choose A.
upvoted 1 times
...
zolthar_z
1 month, 1 week ago
Selected Answer: A
Auto scaling is for known traffic patterns; on-demand is for unknown traffic patterns and could also be more expensive.
upvoted 2 times
...
Malcnorth59
3 months ago
Selected Answer: A
AWS documentation suggests A is correct: https://docs.aws.amazon.com/autoscaling/application/userguide/what-is-application-auto-scaling.html
upvoted 1 times
...
Kubernetes
4 months ago
A is correct. The focus is minimizing the cost of tables.
upvoted 1 times
...
mav3r1ck
5 months ago
Selected Answer: B
Considering the application's need to handle a peak load that is double the average and the fact that the workload is write-heavy, option B (Configure on-demand capacity mode for the table) is the most suitable solution. It directly addresses the variability in workload without requiring upfront capacity planning or additional management overhead, thus likely providing the best cost optimization for this scenario. On-demand capacity mode eliminates the need to scale resources manually or through Auto Scaling and ensures that you only pay for the write and read throughput you consume.
upvoted 2 times
mav3r1ck
5 months ago
A. AWS Application Auto Scaling with Reserved Capacity Pros: Auto Scaling allows you to automatically adjust the provisioned throughput to meet demand, and purchasing reserved RCUs and WCUs can reduce costs for the capacity you know you'll consistently use. Cons: This option might not be as cost-effective for workloads with significant variability and a high write-to-read ratio, especially if the peak load is much higher than the average load. Reserved capacity benefits consistent usage patterns, but the peak load being double the average may not be fully optimized here.
upvoted 1 times
...
mav3r1ck
5 months ago
B. On-demand Capacity Mode Pros: On-demand capacity mode is ideal for unpredictable workloads because it automatically scales to accommodate the load without provisioning. You pay for what you use without managing capacity planning. This mode is particularly suitable for the described scenario where the load spikes significantly and unpredictably. Cons: While potentially more expensive per unit than provisioned capacity with auto-scaling, it eliminates the risk of over-provisioning or under-provisioning.
upvoted 1 times
...
...
kz407
5 months, 1 week ago
Selected Answer: A
A is badly worded, however, because it says "Application" Auto Scaling; it would have to be reworded as "DynamoDB auto scaling" for the answer to be clearly correct. On-demand capacity mode is for unknown read/write patterns. Since the load change patterns are known, anything that involves on-demand capacity mode can be eliminated (hence not B). DAX is a caching service deployed in front of DynamoDB, geared towards "performance at scale". The problem in the use case is to optimize table costs, and using DAX would incur additional costs, so anything that involves DAX (C and D) can also be eliminated.
upvoted 2 times
Malcnorth59
3 months ago
I initially thought the same but the AWS definition of Application autoscaling listed here includes DynamoDB: https://docs.aws.amazon.com/autoscaling/application/userguide/what-is-application-auto-scaling.html
upvoted 1 times
...
...
anubha.agrahari
5 months, 3 weeks ago
Selected Answer: A
https://aws.amazon.com/blogs/database/amazon-dynamodb-auto-scaling-performance-and-cost-optimization-at-any-scale/#:~:text=You%20can%20approximate%20a%20blend,save%20money%20as%20reserved%20capacity
upvoted 2 times
...
8608f25
6 months, 2 weeks ago
Selected Answer: B
Option B is the most cost-effective solution for workloads with significant fluctuations and unpredictable access patterns. The on-demand capacity mode automatically adjusts the table’s throughput capacity as needed in response to actual traffic, eliminating the need to manually configure or manage capacity. This mode is ideal for applications with irregular traffic patterns, such as a significant peak once a week, because you only pay for the read and write requests your application performs, without having to provision throughput in advance. Option B directly addresses the requirement to minimize costs associated with fluctuating loads, especially when the load significantly exceeds the average only during a brief period, by leveraging DynamoDB’s on-demand capacity mode to automatically scale and pay only for what is used.
upvoted 1 times
...
igor12ghsj577
6 months, 3 weeks ago
Selected Answer: A
I think there is a mistake in answer A: it should be DynamoDB auto scaling instead of Application Auto Scaling, or application and DynamoDB auto scaling.
upvoted 1 times
igor12ghsj577
6 months, 3 weeks ago
Amazon DynamoDB auto scaling uses the AWS Application Auto Scaling service to dynamically adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns. This enables a table or a global secondary index to increase its provisioned read and write capacity to handle sudden increases in traffic, without throttling. When the workload decreases, Application Auto Scaling decreases the throughput so that you don't pay for unused provisioned capacity.
upvoted 2 times
...
...
jpa8300
7 months, 4 weeks ago
Selected Answer: D
I choose option D, because DAX is not only an accelerator for reads; it also acts as a cache, relieving a lot of load from the DB.
upvoted 1 times
...
ninomfr64
8 months, 1 week ago
Selected Answer: A
A -> You can scale DynamoDB tables and global secondary indexes using target-tracking scaling policies and scheduled scaling. Here I would go for scheduled scaling. https://docs.aws.amazon.com/autoscaling/application/userguide/services-that-can-integrate-dynamodb.html
B -> On-demand capacity mode is for unknown workloads, which is not the case here.
C -> DAX comes with costs and helps with reads, while here we have a more write-bound workload.
D -> See the B and C comments.
upvoted 2 times
...
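Since the weekly peak is predictable, scheduled scaling fits well. Below is a minimal sketch with boto3's Application Auto Scaling client; the table name, capacity numbers, and cron windows are illustrative (the real values would match the measured average and peak).

```python
import boto3

aas = boto3.client("application-autoscaling", region_name="us-east-1")

TABLE = "table/ImageJobs"  # hypothetical table name
DIM = "dynamodb:table:WriteCapacityUnits"

# Register the table's WCU as a scalable target sized for the average load.
aas.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId=TABLE,
    ScalableDimension=DIM,
    MinCapacity=500,
    MaxCapacity=500,
)

# Raise capacity just before the weekly 4-hour peak (double the average)
# and lower it again afterwards; cron expressions are in UTC.
aas.put_scheduled_action(
    ServiceNamespace="dynamodb",
    ScheduledActionName="weekly-peak-scale-up",
    ResourceId=TABLE,
    ScalableDimension=DIM,
    Schedule="cron(0 8 ? * MON *)",
    ScalableTargetAction={"MinCapacity": 1000, "MaxCapacity": 1000},
)
aas.put_scheduled_action(
    ServiceNamespace="dynamodb",
    ScheduledActionName="weekly-peak-scale-down",
    ResourceId=TABLE,
    ScalableDimension=DIM,
    Schedule="cron(0 12 ? * MON *)",
    ScalableTargetAction={"MinCapacity": 500, "MaxCapacity": 500},
)
```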
severlight
9 months, 2 weeks ago
Selected Answer: A
we use scheduled scaling here
upvoted 1 times
...
whenthan
10 months, 1 week ago
Selected Answer: A
https://aws.amazon.com/blogs/database/amazon-dynamodb-auto-scaling-performance-and-cost-optimization-at-any-scale/#:~:text=You%20can%20approximate%20a%20blend,save%20money%20as%20reserved%20capacity.
upvoted 1 times
...
Simon523
11 months, 3 weeks ago
Selected Answer: A
Reserved capacity is available for single-Region, provisioned read and write capacity units (RCU and WCU) on DynamoDB tables including global and local secondary indexes. You cannot purchase reserved capacity for replicated WCUs (rWCUs).
upvoted 2 times
...
awsent
11 months, 3 weeks ago
Correct Answer: A Application auto scaling can be used for scheduled scaling for DynamoDB tables and GSIs https://docs.aws.amazon.com/autoscaling/application/userguide/what-is-application-auto-scaling.html
upvoted 1 times
...
venvig
1 year ago
Selected Answer: A
Refer https://aws.amazon.com/dynamodb/reserved-capacity/ Reserved capacity is a great option to reduce DynamoDB costs for workloads with steady usage and predictable growth over time Reserved capacity mode might be best if you: Have predictable application traffic. Run applications whose traffic is consistent or ramps gradually. Can forecast capacity requirements to control costs.
upvoted 2 times
...
uC6rW1aB
1 year ago
Selected Answer: B
A. This approach takes into account peak and average loads, but it might lead to unnecessary costs since you have to pay for reserved RCUs and WCUs, even during off-peak times. B. The on-demand capacity mode can adjust dynamically based on actual demand, making it a suitable option, especially considering the peak lasts only for 4 hours. C. DAX is designed to accelerate read operations, but the problem description indicates the access pattern is primarily write-focused. Therefore, this option might not be the best choice. D. This option combines DAX with the on-demand capacity mode, but as mentioned, DAX might not be necessary. Conclusion: Option B (configuring the table for on-demand capacity mode) seems to be the most appropriate choice, as it allows for dynamic capacity scaling during peaks and only pays for the required capacity costs during off-peak times.
upvoted 3 times
Yes I am also not sure about option B & D
upvoted 1 times
...
subbupro
8 months, 3 weeks ago
A is correct; reserved capacity is only for the average load, which costs less than on-demand. So A is correct.
upvoted 2 times
...
grire974
7 months, 2 weeks ago
Yeah, B is listed as correct for this question in Neal's Udemy exam set. However, if performance isn't mentioned (DynamoDB throttling can occur with reserved capacity), I think A is best when there's a known average and the reserved amount covers the average. Man, it would be great if there were some consensus among mock exam providers. FML.
upvoted 1 times
...
...
ggrodskiy
1 year ago
Correct B. Option A uses AWS Application Auto Scaling, which is a service that helps you adjust provisioned capacity automatically in response to actual traffic patterns. However, this option requires you to purchase reserved RCUs and WCUs, which are commitments to pay for a minimum amount of capacity for a specific term. This option can be more expensive and less flexible than on-demand capacity mode. https://aws.amazon.com/blogs/database/amazon-dynamodb-auto-scaling-performance-and-cost-optimization-at-any-scale/
upvoted 2 times
b3llman
1 year ago
If you already know the usage patterns, you save $$ by purchasing reserved RCUs and WCUs. It is what you want to do to save $$ because you will definitely use the reserved units, and what goes beyond that is what autoscaling is for.
upvoted 1 times
...
...
Jonalb
1 year, 1 month ago
Selected Answer: A
A is correct, very correct!
upvoted 2 times
...
NikkyDicky
1 year, 1 month ago
Selected Answer: D
A won't work because you reserve for the average load, so peak demand will result in errors. Between B and D, D provides an additional, even if small, benefit for reads.
upvoted 1 times
NikkyDicky
1 year, 1 month ago
Changing to A after re-reading DDB autoscaling - it actually changes provisioned capacity, so should work
upvoted 1 times
...
grire974
7 months, 2 weeks ago
How does DAX reduce cost? It requires adding ec2 instances into the solution to power your DAX cluster; and the workload is write intensive. I think DAX is for performance; less so cost; perhaps cost if it was extremely read intensive.
upvoted 1 times
...
...
[Removed]
1 year, 2 months ago
Selected Answer: B
The question states the application is WCU heavy, so DAX will have minimal impact on reducing load/cost, and comes with its own costs, which excludes C and D. It doesn't matter whether the performance needs are unpredictable or not, what matters is they are variable, and that under-performing has been ruled out by the question. So the choice is between provisioning at a constant level high enough to cope with the 4h peak, or provisioning at a level that varies. DDB provides no native mechanism other than on-demand to alter the provisioning levels over time, so B is the answer here. On-demand R/WCU usage isn't any more expensive than explicitly provisioned usage, per unit. The difference is that on-demand usage removes the upper limit on provisioning, so if the application wants to use more, it can, and you pay for it. So for the 4h a week the app needs double the WCU level, DDB will provide it, and the cost per hour will be twice as high, but for the rest of the week the cost will be the same as if you had explicitly provisioned the lower level.
upvoted 2 times
...
ailves
1 year, 2 months ago
Selected Answer: A
Because on-demand is cheaper only for unpredictable patterns, we can't choose B, C, or D.
upvoted 3 times
...
0r3m
1 year, 2 months ago
Selected Answer: D
This solution meets the requirements by using Application Auto Scaling to automatically increase capacity during the peak period, which will handle the double the average load. And by purchasing reserved RCUs and WCUs to match the average load, it will minimize the cost of the table for the rest of the week when the load is close to the average.
upvoted 1 times
...
andreitugui
1 year, 2 months ago
Selected Answer: B
Since the application load is close to the average load for most of the week and the peak load only occurs once a week for a limited 4-hour period, it is not necessary to provision and pay for provisioned capacity (RCUs and WCUs) to match the peak load. On-demand capacity mode provides the flexibility to automatically scale based on the actual load, allowing you to optimize costs by paying only for the resources consumed during those peak periods.
upvoted 2 times
...
EricZhang
1 year, 3 months ago
A - incorrect: when the peak hour comes, the DynamoDB table will throw throttling errors.
C & D - incorrect: DAX is for apps that are read-intensive.
B - have to choose this.
upvoted 1 times
...
gameoflove
1 year, 3 months ago
Selected Answer: A
On-demand mode is cost-optimized
upvoted 1 times
...
rajalek
1 year, 4 months ago
Utilize on-demand capacity mode for the DynamoDB table - this mode allows the table to automatically scale its capacity up and down based on actual usage. This means that during the peak load the table will scale up to handle the increased traffic and scale down during periods of lower traffic. Since the peak load occurs once a week for a 4-hour period, you only pay for the resources actually used during that time and the table is not over-provisioned for the rest of the week. Answer B
upvoted 1 times
...
mikad
1 year, 4 months ago
A is the answer; DynamoDB Accelerator (DAX) is good for applications that are read-intensive, not write-intensive. https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.html#DAX.use-cases
upvoted 1 times
...
takecoffe
1 year, 4 months ago
Selected Answer: D
DAX is the better choice
upvoted 2 times
...
hgc2023
1 year, 5 months ago
read and write units are more expensive in on demand mode so I don't think D is the answer
upvoted 1 times
...
mfsec
1 year, 5 months ago
Selected Answer: A
Use AWS Application Auto Scaling makes the most sense
upvoted 2 times
igor12ghsj577
6 months, 3 weeks ago
How can application auto scaling, which only uses the DB, help you decrease the cost of the database itself?
upvoted 1 times
...
...
Dimidrol
1 year, 5 months ago
Selected Answer: A
A for me, not B. On-demand is ideal for bursty, new, or unpredictable workloads whose traffic can spike in seconds or minutes, and when underprovisioned capacity would impact the user experience.
upvoted 3 times
...
dev112233xx
1 year, 5 months ago
Selected Answer: D
D, no doubt. In addition to on-demand, DAX can reduce the DynamoDB cost by up to 60% ✅
upvoted 2 times
...
kiran15789
1 year, 5 months ago
Selected Answer: A
Will go with A in the exam, as the peak load is known.
upvoted 3 times
...
kiran15789
1 year, 5 months ago
Selected Answer: A
Tuning DynamoDB is not sufficient; you also need to scale the application to meet peak loads.
upvoted 2 times
...
dev112233xx
1 year, 5 months ago
Selected Answer: D
Answer D makes sense. On-demand is a good option for infrequent access to DynamoDB. Option A requires code refactoring.
upvoted 2 times
...
Sarutobi
1 year, 6 months ago
Selected Answer: B
In this link https://aws.amazon.com/blogs/aws/amazon-dynamodb-on-demand-no-capacity-planning-and-pay-per-request-pricing/ I found this: "DynamoDB on-demand is useful if your application traffic is difficult to predict and control, your workload has large spikes of short duration, or if your average table utilization is well below the peak." I think this is very close to what we are looking for so maybe B.
upvoted 1 times
sambb
1 year, 6 months ago
Here the traffic is predictable: "The peak load occurs once a week for a 4-hour period and is double the average load". Hence, with AWS Auto Scaling we can schedule the WCU scaling, which would be way cheaper than on-demand capacity.
upvoted 2 times
...
...
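sambb's scheduled-scaling idea can be sketched with boto3's Application Auto Scaling client; the table name, capacity values, and cron windows below are illustrative assumptions, not values from the question:

import boto3

aas = boto3.client("application-autoscaling")

TABLE = "table/orders"  # illustrative table name
DIM = "dynamodb:table:WriteCapacityUnits"

# Register the table's WCU as a scalable target (floor/ceiling are illustrative).
aas.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId=TABLE,
    ScalableDimension=DIM,
    MinCapacity=100,
    MaxCapacity=400,
)

# Raise the capacity floor just before the known weekly 4-hour peak...
aas.put_scheduled_action(
    ServiceNamespace="dynamodb",
    ScheduledActionName="pre-peak-scale-up",
    ResourceId=TABLE,
    ScalableDimension=DIM,
    Schedule="cron(0 8 ? * MON *)",  # illustrative: Mondays 08:00 UTC
    ScalableTargetAction={"MinCapacity": 200, "MaxCapacity": 400},
)

# ...and lower it again once the peak window has passed.
aas.put_scheduled_action(
    ServiceNamespace="dynamodb",
    ScheduledActionName="post-peak-scale-down",
    ResourceId=TABLE,
    ScalableDimension=DIM,
    Schedule="cron(0 12 ? * MON *)",
    ScalableTargetAction={"MinCapacity": 100, "MaxCapacity": 400},
)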
God_Is_Love
1 year, 6 months ago
On-demand when needed is good, but here we know that only 4 hours is peak, so it's better to purchase reserved RCUs/WCUs and enable auto scaling on top of them to meet the 4-hour high demand. DAX is an extremely performant cache cluster, but it is not ideal for write-intensive workloads, and that alone doesn't justify adding DAX for the reads here. Look at where DAX does not fit: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.html Here, for reducing costs, A is correct. See here how provisioned capacity reduces costs: https://aws.amazon.com/dynamodb/pricing/?refid=ce6876ca-ceb9-46a2-adaa-d36fce8fbba7
upvoted 3 times
...
c73bf38
1 year, 6 months ago
Selected Answer: A
A. Use AWS Application Auto Scaling to increase capacity during the peak period. Purchase reserved RCUs and WCUs to match the average load. Since the peak period is only 4 hours a week and the application load is close to the average load for the rest of the week, it is not cost-effective to configure on-demand capacity mode for the table. Instead, AWS Application Auto Scaling can be used to increase the RCUs and WCUs during the peak period to meet the increased demand, and then decrease them to match the average load for the rest of the week. Additionally, reserved capacity can be purchased to match the average load, further reducing costs. Using DynamoDB Accelerator (DAX) in front of the table does not directly address the issue of cost optimization.
upvoted 2 times
...
zozza2023
1 year, 6 months ago
Selected Answer: A
This has nothing to do with DAX. Between A and B, A is the answer.
upvoted 3 times
moota
1 year, 6 months ago
DAX is useful for read-intensive loads.
upvoted 2 times
moota
1 year, 6 months ago
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.html#DAX.use-cases
upvoted 2 times
...
vvahe
1 year, 5 months ago
This. DAX is not an option, and on-demand isn't either, which leaves A.
upvoted 1 times
...
...
...
pravi1
1 year, 7 months ago
A makes sense here. On-demand is more costly compared to reserved capacity.
upvoted 2 times
...
DDONG
1 year, 7 months ago
A SAPC01 #1005
upvoted 1 times
...
masetromain
1 year, 7 months ago
Selected Answer: B
B. Configure on-demand capacity mode for the table. This solution will allow the table to automatically scale its capacity based on the actual usage, and will minimize the cost of the table as it will only pay for the capacity used during the peak load period, and not the entire week. Additionally, since the access pattern includes more writes than reads, on-demand capacity mode is a good fit as it is more cost-effective for write-heavy workloads.
upvoted 3 times
masetromain
1 year, 7 months ago
Option D is a possible solution that could meet the requirements, as it leverages DynamoDB Accelerator (DAX) to improve the performance of read operations on the table and also configures on-demand capacity mode for the table which will minimize the cost as it only charges for the requests made to the table. However, it's important to consider that DAX will add some costs to the solution, and it's not guaranteed that the on-demand capacity mode will be enough to handle the peak load, so it's important to monitor the table and make sure that the performance is meeting the expectations.
upvoted 1 times
...
...
Question #42 Topic 1

A solutions architect needs to advise a company on how to migrate its on-premises data processing application to the AWS Cloud. Currently, users upload input files through a web portal. The web server then stores the uploaded files on NAS and messages the processing server over a message queue. Each media file can take up to 1 hour to process. The company has determined that the number of media files awaiting processing is significantly higher during business hours, with the number of files rapidly declining after business hours.

What is the MOST cost-effective migration recommendation?

  • A. Create a queue using Amazon SQS. Configure the existing web server to publish to the new queue. When there are messages in the queue, invoke an AWS Lambda function to pull requests from the queue and process the files. Store the processed files in an Amazon S3 bucket.
  • B. Create a queue using Amazon MQ. Configure the existing web server to publish to the new queue. When there are messages in the queue, create a new Amazon EC2 instance to pull requests from the queue and process the files. Store the processed files in Amazon EFS. Shut down the EC2 instance after the task is complete.
  • C. Create a queue using Amazon MQ. Configure the existing web server to publish to the new queue. When there are messages in the queue, invoke an AWS Lambda function to pull requests from the queue and process the files. Store the processed files in Amazon EFS.
  • D. Create a queue using Amazon SQS. Configure the existing web server to publish to the new queue. Use Amazon EC2 instances in an EC2 Auto Scaling group to pull requests from the queue and process the files. Scale the EC2 instances based on the SQS queue length. Store the processed files in an Amazon S3 bucket.
Reveal Solution Hide Solution

Correct Answer: D 🗳️

Community vote distribution
D (96%)
2%

masetromain
Highly Voted 1 year, 7 months ago
Selected Answer: D
The correct answer would be option D. This option suggests creating a queue using Amazon SQS, configuring the existing web server to publish to the new queue, and using EC2 instances in an EC2 Auto Scaling group to pull requests from the queue and process the files. The EC2 instances can be scaled based on the SQS queue length, which ensures that the resources are available during peak usage times and reduces costs during non-peak times. Option A is not correct because it suggests using AWS Lambda which has a maximum execution time of 15 minutes. Option B is not correct because it suggests creating a new EC2 instance for each message in the queue, which is not cost-effective. Option C is not correct because it suggests using Amazon EFS, which is not a suitable option for long-term storage of large files.
upvoted 21 times
...
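As a sketch of the pull model option D relies on, a worker process on each EC2 instance might long-poll SQS like this (the queue URL, visibility timeout, and the process() placeholder are illustrative):

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/media-jobs"  # illustrative

def process(body: str) -> None:
    """Placeholder for the up-to-1-hour media processing step."""
    ...

while True:
    # Long polling (WaitTimeSeconds) reduces empty receives and cost.
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=1,
        WaitTimeSeconds=20,
        VisibilityTimeout=2 * 60 * 60,  # keep the message hidden while processing
    )
    for msg in resp.get("Messages", []):
        process(msg["Body"])
        # Delete only after successful processing so failures are redelivered.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])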
ninomfr64
Highly Voted 8 months ago
Selected Answer: D
Not A - Lambda max execution time is 15 minutes; media processing can take up to 1 hour. Not B - Amazon MQ is not needed (more expensive than SQS) and EFS is more expensive than S3. Not C - Amazon MQ is not needed (more expensive than SQS) and Lambda max execution time is 15 minutes, while processing can take up to 1 hour. D does the job at the lowest cost thanks to SQS, S3, and the EC2 Auto Scaling group.
upvoted 7 times
...
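The "scale on queue length" piece can be expressed as a target-tracking policy; a minimal boto3 sketch (the AWS guide on scaling by SQS actually recommends a computed backlog-per-instance metric, so treat this raw-queue-depth version as a simplification, with all names and values illustrative):

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="media-workers",  # illustrative ASG name
    PolicyName="scale-on-queue-depth",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": "media-jobs"}],
            "Statistic": "Average",
        },
        # Illustrative target: keep roughly 10 queued files per instance.
        "TargetValue": 10.0,
    },
)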
Malcnorth59
Most Recent 3 months ago
Selected Answer: D
Lambda will not work, so A is not possible. D is going to be the most cost-effective as the resources will scale based on queue length.
upvoted 1 times
...
mav3r1ck
5 months ago
Selected Answer: D
Given the need to process files that can take up to 1 hour each and the variability in workload, option D (Amazon SQS, EC2 Auto Scaling, and S3) appears to be the most cost-effective and practical solution. It leverages SQS for queue management, enabling efficient handling of the processing queue's variability. EC2 Auto Scaling allows for flexible and cost-effective scaling of processing capacity, ramping up during high-demand periods and scaling down when demand wanes, thus optimizing costs. Finally, Amazon S3 offers a highly durable and cost-effective solution for storing the processed media files. This option provides the necessary flexibility for long processing tasks while efficiently managing the variable demand and optimizing storage costs.
upvoted 1 times
...
Simon523
11 months, 2 weeks ago
Selected Answer: D
Simple Queue Service (SQS) is based on a pull model. Here are some of the important features:
- Reliable, scalable, fully managed message queuing service
- High availability
- Unlimited scaling; auto scales to process billions of messages per day
- Low cost (pay for use)
upvoted 1 times
...
aviathor
12 months ago
Selected Answer: D
This is quite simple. Any answer (A and C) consisting of using Lambda for processing the files is out because of the 15 minutes limit on Lambda processes. B is out because using EFS is expensive and it does not specify how to launch and terminate the EC2 instances. Amazon MQ is not required either. This leaves D which uses SQS, Auto Scaling Groups and publishes the resulting files to S3.
upvoted 2 times
...
chico2023
1 year ago
Selected Answer: D
Answer: D You can eliminate A and C right in the beginning: Lambda functions can run up to 15 minutes. B won't help much as you need to create new EC2 instances (manually, apparently) and EFS is more expensive than S3.
upvoted 1 times
...
NikkyDicky
1 year, 1 month ago
Selected Answer: D
d for sure
upvoted 1 times
...
ailves
1 year, 2 months ago
Selected Answer: D
Because "Each media file can take up to 1 hour to process" and we know Lambda has a 15-minute limit, the correct answer is D.
upvoted 1 times
...
EricZhang
1 year, 3 months ago
D - https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-using-sqs-queue.html
upvoted 1 times
...
huanaws088
1 year, 4 months ago
Selected Answer: B
I'm sure it is B, because: 1. SQS and SNS are "cloud-native" services with proprietary protocols from AWS. 2. Traditional applications running on premises may use open protocols such as MQTT, AMQP, etc., so when migrating to the cloud, instead of re-engineering the application to use SQS and SNS (which would be very expensive), we can use Amazon MQ. 3. Amazon MQ doesn't "scale" as much as SQS/SNS and runs on servers, but it has both queue features (~ SQS) and topic features (~ SNS). https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-difference-from-amazon-mq-sns.html
upvoted 1 times
hexie
1 year, 1 month ago
In terms of cost (which is a point on the question), Amazon SQS is generally more cost-effective compared to Amazon MQ for this specific use case. SQS pricing is based on the number of requests and message data transfer, whereas Amazon MQ pricing includes additional costs associated with broker instances and data transfer.
upvoted 1 times
...
...
takecoffe
1 year, 4 months ago
Selected Answer: D
SQS and autoscaling no doubt answer is D
upvoted 2 times
...
mfsec
1 year, 5 months ago
Selected Answer: D
SQS and Auto Scaling
upvoted 2 times
...
dev112233xx
1 year, 5 months ago
Selected Answer: D
D makes sense. Lambda can't run for more than 15 minutes, and Amazon MQ is only recommended when migrating existing message brokers that rely on compatibility with APIs such as JMS or protocols such as AMQP, MQTT, OpenWire, and STOMP; the question doesn't mention any of these.
upvoted 4 times
...
God_Is_Love
1 year, 6 months ago
A and C are out because Lambda does not support runs of more than 15 minutes. B says to create an EC2 instance for each new message, which is certainly not cost-effective and bad design as well. So the answer is D.
upvoted 2 times
...
c73bf38
1 year, 6 months ago
Selected Answer: D
The most cost-effective migration recommendation to handle peak loads during business hours is to use Amazon SQS to create a queue, configure the existing web server to publish to the new queue, and use Amazon EC2 instances in an EC2 Auto Scaling group to pull requests from the queue and process the files. The EC2 instances should be scaled based on the SQS queue length. Storing the processed files in an Amazon S3 bucket will help in reducing the storage cost. This approach is scalable and can handle peak loads during business hours, while still being cost-effective during non-business hours. Option A is also a possible solution, but using EC2 instances in an EC2 Auto Scaling group is a more scalable and cost-effective solution. Options B and C involve using Amazon EFS, which can be more expensive than Amazon S3.
upvoted 2 times
...
zozza2023
1 year, 6 months ago
Selected Answer: D
D is the right answer
upvoted 2 times
...
Musk
1 year, 6 months ago
Selected Answer: D
Because A is not valid due to time
upvoted 2 times
...
pravi1
1 year, 7 months ago
D will be correct.
upvoted 1 times
...
zhangyu20000
1 year, 7 months ago
D is correct because it takes up to 1 hour to process a file. Lambda can only run for 15 minutes.
upvoted 1 times
...
masetromain
1 year, 7 months ago
Selected Answer: A
A. Create a queue using Amazon SQS. Configure the existing web server to publish to the new queue. When there are messages in the queue, invoke an AWS Lambda function to pull requests from the queue and process the files. Store the processed files in an Amazon S3 bucket. This approach will be the most cost-effective as it uses serverless AWS Lambda to process the files, which only incurs charges while the function is running, and is therefore well suited for workloads with variable and unpredictable usage patterns. Additionally, using Amazon S3 for storage is a cost-effective option as it allows for the storage of large amounts of data at a low cost.
upvoted 1 times
Atila50
1 year, 7 months ago
Although this answer is the most cost-effective, AWS Lambda only allows functions to run up to 15 minutes.
upvoted 1 times
Atila50
1 year, 7 months ago
correct ans is D
upvoted 2 times
...
...
andctygr
1 year, 7 months ago
You cannot use a Lambda function, since the question mentions that processing can take up to 1 hour. AWS Lambda functions can run for only 15 minutes each.
upvoted 1 times
masetromain
1 year, 7 months ago
https://www.examtopics.com/discussions/amazon/view/36333-exam-aws-certified-solutions-architect-professional-topic-1/ you are right, I was wrong despite the fact that I already knew this question. sorry
upvoted 3 times
...
...
...
Question #43 Topic 1

A company is using Amazon OpenSearch Service to analyze data. The company loads data into an OpenSearch Service cluster with 10 data nodes from an Amazon S3 bucket that uses S3 Standard storage. The data resides in the cluster for 1 month for read-only analysis. After 1 month, the company deletes the index that contains the data from the cluster. For compliance purposes, the company must retain a copy of all input data.

The company is concerned about ongoing costs and asks a solutions architect to recommend a new solution.

Which solution will meet these requirements MOST cost-effectively?

  • A. Replace all the data nodes with UltraWarm nodes to handle the expected capacity. Transition the input data from S3 Standard to S3 Glacier Deep Archive when the company loads the data into the cluster.
  • B. Reduce the number of data nodes in the cluster to 2 Add UltraWarm nodes to handle the expected capacity. Configure the indexes to transition to UltraWarm when OpenSearch Service ingests the data. Transition the input data to S3 Glacier Deep Archive after 1 month by using an S3 Lifecycle policy.
  • C. Reduce the number of data nodes in the cluster to 2. Add UltraWarm nodes to handle the expected capacity. Configure the indexes to transition to UltraWarm when OpenSearch Service ingests the data. Add cold storage nodes to the cluster Transition the indexes from UltraWarm to cold storage. Delete the input data from the S3 bucket after 1 month by using an S3 Lifecycle policy.
  • D. Reduce the number of data nodes in the cluster to 2. Add instance-backed data nodes to handle the expected capacity. Transition the input data from S3 Standard to S3 Glacier Deep Archive when the company loads the data into the cluster.
Reveal Solution Hide Solution

Correct Answer: B 🗳️

Community vote distribution
B (94%)
6%

masetromain
Highly Voted 1 year, 7 months ago
Selected Answer: B
B is the most cost-effective solution as it reduces the number of data nodes in the cluster to 2 and adds UltraWarm nodes to handle the expected capacity. By configuring the indexes to transition to UltraWarm when OpenSearch Service ingests the data, the company can take advantage of the lower storage costs of UltraWarm. Additionally, by transitioning the input data to S3 Glacier Deep Archive after 1 month using an S3 Lifecycle policy, the company can further reduce costs by using the lower storage costs of S3 Glacier Deep Archive for long-term data retention.
upvoted 20 times
masetromain
1 year, 7 months ago
Option C can meet the requirements of reducing the number of data nodes in the cluster and using UltraWarm and cold storage nodes to handle the expected capacity and moving the data to lower cost storage after 1 month. However, it may not be the most cost-effective solution as it involves additional complexity in configuring the indexes to transition between different storage tiers, and may also require additional management and maintenance of the cold storage nodes. Option B, where the data is transitioned from S3 Standard to S3 Glacier Deep Archive using an S3 Lifecycle policy is simpler and more cost-effective as it eliminates the need for additional storage tiers and management.
upvoted 3 times
God_Is_Love
1 year, 6 months ago
B says to delete but question asks for saving on compliance purposes.
upvoted 5 times
God_Is_Love
1 year, 6 months ago
* I meant C says..
upvoted 5 times
...
...
...
...
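For reference, the lifecycle half of option B is a single API call; a minimal boto3 sketch with an illustrative bucket name and rule ID:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="analytics-input",  # illustrative bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "input-to-deep-archive-after-1-month",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "Status": "Enabled",
                # After the month of OpenSearch analysis, the S3 copy is kept
                # only for compliance, so Deep Archive is the cheapest home.
                "Transitions": [{"Days": 30, "StorageClass": "DEEP_ARCHIVE"}],
            }
        ]
    },
)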
Malcnorth59
Most Recent 3 months ago
Why can't I switch all nodes to UltraWarm? I can't find it anywhere in the documentation, and it's not listed in the prerequisites. Also, why can the number of nodes be reduced from 10 to 2? Is that because UltraWarm uses S3?
upvoted 1 times
...
sarlos
4 months ago
why not D?
upvoted 1 times
...
ninomfr64
8 months ago
I need help here: to use UltraWarm storage, domains must have dedicated master nodes, per the doc https://docs.aws.amazon.com/opensearch-service/latest/developerguide/ultrawarm.html The scenario mentions "an OpenSearch Service cluster with 10 data nodes". Assuming you only have these nodes in the cluster, every answer requires adding dedicated master node(s). And assuming we also have dedicated master nodes, why not replace all data nodes with UltraWarm nodes?
upvoted 1 times
ninomfr64
8 months ago
I think I got it: UltraWarm is for read-only data, so you still need at least one data node.
upvoted 1 times
...
...
venvig
1 year ago
Selected Answer: B
Option A says to replace all data nodes with UltraWarm nodes, but this is NOT possible. There has to be at least one data node.
upvoted 3 times
...
NikkyDicky
1 year, 1 month ago
Selected Answer: B
B I think :/
upvoted 2 times
...
Damijo
1 year, 5 months ago
Selected Answer: A
If you look at the IAM documentation here, you can see that the ec2:AuthorizeSecurityGroupIngress action doesn't have any conditions that would allow you to specify the IP addresses in the inbound/outbound rules. https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonec2.html
upvoted 2 times
Jesuisleon
1 year, 2 months ago
I think you are referring to All AWS Certified Solutions Architect - Professional SAP-C02 Questions, question 44. Yes, I changed from D to A after reading this link.
upvoted 1 times
...
eddylynx
1 year, 1 month ago
You can specify the IP address with the CIDR parameter:
https://ec2.amazonaws.com/?Action=AuthorizeSecurityGroupIngress
&GroupId=sg-112233
&IpPermissions.1.IpProtocol=tcp
&IpPermissions.1.FromPort=3389
&IpPermissions.1.ToPort=3389
&IpPermissions.1.IpRanges.1.CidrIp=192.0.2.0/24
&IpPermissions.1.IpRanges.1.Description=Access from New York office
https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_AuthorizeSecurityGroupIngress.html
upvoted 1 times
...
...
dev112233xx
1 year, 5 months ago
Selected Answer: B
B - makes more sense
upvoted 4 times
...
Ajani
1 year, 5 months ago
UltraWarm provides a cost-effective way to store large amounts of read-only data on Amazon OpenSearch Service. Standard data nodes use "hot" storage, which takes the form of instance stores or Amazon EBS volumes attached to each node. Hot storage provides the fastest possible performance for indexing and searching new data.
upvoted 3 times
...
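The index transition in options B and C is typically configured through an Index State Management policy; a hedged sketch of a hot-to-UltraWarm policy posted with Python requests (the endpoint and credentials are placeholders, the 1-day age threshold is an assumption, and the domain must already have UltraWarm enabled):

import json
import requests

ENDPOINT = "https://search-example.us-east-1.es.amazonaws.com"  # placeholder endpoint
AUTH = ("master-user", "master-password")                        # placeholder credentials

policy = {
    "policy": {
        "description": "Move indexes to UltraWarm shortly after ingest",
        "default_state": "hot",
        "states": [
            {
                "name": "hot",
                "actions": [],
                "transitions": [
                    {"state_name": "warm", "conditions": {"min_index_age": "1d"}}
                ],
            },
            {
                "name": "warm",
                # warm_migration moves the index onto the UltraWarm nodes
                "actions": [{"warm_migration": {}}],
                "transitions": [],
            },
        ],
        "ism_template": [{"index_patterns": ["analytics-*"], "priority": 100}],
    }
}

resp = requests.put(
    f"{ENDPOINT}/_plugins/_ism/policies/warm-after-ingest",
    auth=AUTH,
    headers={"Content-Type": "application/json"},
    data=json.dumps(policy),
)
resp.raise_for_status()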
moota
1 year, 6 months ago
I asked ChatGPT. Can I use all UltraWarm nodes in AWS OpenSearch instead of data nodes? :) No, UltraWarm nodes in AWS OpenSearch are designed for storage and retrieval of infrequently accessed data, while data nodes are optimized for faster indexing and searching of data. While UltraWarm nodes can be used as a complement to data nodes, they are not a replacement for them.
upvoted 2 times
hobokabobo
1 year, 6 months ago
This eliminates option A
upvoted 2 times
...
...
Musk
1 year, 6 months ago
Selected Answer: B
Option B is the most cost-effective solution that meets the requirements. Reducing the number of data nodes in the cluster and adding UltraWarm nodes will help to reduce the ongoing costs of running the OpenSearch Service cluster. Configuring the indexes to transition to UltraWarm when OpenSearch Service ingests the data will further reduce costs. Additionally, transitioning the input data to S3 Glacier Deep Archive after 1 month by using an S3 Lifecycle policy will lower the storage costs of retaining the input data for compliance purposes.
upvoted 4 times
...
Question #44 Topic 1

A company has 10 accounts that are part of an organization in AWS Organizations. AWS Config is configured in each account. All accounts belong to either the Prod OU or the NonProd OU.

The company has set up an Amazon EventBridge rule in each AWS account to notify an Amazon Simple Notification Service (Amazon SNS) topic when an Amazon EC2 security group inbound rule is created with 0.0.0.0/0 as the source. The company’s security team is subscribed to the SNS topic.

For all accounts in the NonProd OU, the security team needs to remove the ability to create a security group inbound rule that includes 0.0.0.0/0 as the source.

Which solution will meet this requirement with the LEAST operational overhead?

  • A. Modify the EventBridge rule to invoke an AWS Lambda function to remove the security group inbound rule and to publish to the SNS topic. Deploy the updated rule to the NonProd OU.
  • B. Add the vpc-sg-open-only-to-authorized-ports AWS Config managed rule to the NonProd OU.
  • C. Configure an SCP to allow the ec2:AuthorizeSecurityGroupIngress action when the value of the aws:SourceIp condition key is not 0.0.0.0/0. Apply the SCP to the NonProd OU.
  • D. Configure an SCP to deny the ec2:AuthorizeSecurityGroupIngress action when the value of the aws:SourceIp condition key is 0.0.0.0/0. Apply the SCP to the NonProd OU.
Reveal Solution Hide Solution

Correct Answer: C 🗳️

Community vote distribution
D (59%)
A (38%)
2%

masetromain
Highly Voted 1 year, 7 months ago
Selected Answer: D
The solution that meets this requirement with the LEAST operational overhead is D. Configuring an SCP to deny the ec2:AuthorizeSecurityGroupIngress action when the value of the aws:SourceIp condition key is 0.0.0.0/0, and applying the SCP to the NonProd OU. This solution would prevent the security group inbound rule from being created in the first place and will not require any additional steps or actions to be taken in order to remove the rule. This is less operationally intensive than modifying the EventBridge rule to invoke an AWS Lambda function, adding a Config rule or allowing the ec2:AuthorizeSecurityGroupIngress action with a specific IP.
upvoted 51 times
masetromain
1 year, 7 months ago
Option C does not meet the requirement that the security team needs to remove the ability to create a security group inbound rule that includes 0.0.0.0/0 as the source. It only allows the ec2:AuthorizeSecurityGroupIngress action when the value of the aws:SourceIp condition key is not 0.0.0.0/0. It does not prevent the creation of a security group inbound rule that includes 0.0.0.0/0 as the source, it only allows for the ingress action on non-0.0.0.0/0 IPs. Option D is the best solution as it denies the ec2:AuthorizeSecurityGroupIngress action when the value of the aws:SourceIp condition key is 0.0.0.0/0. This will prevent the creation of any security group inbound rule that includes 0.0.0.0/0 as the source.
upvoted 6 times
MikelH93
1 year, 3 months ago
The answer can't be C or D because the aws:SourceIp condition key doesn't exist with SCPs. So the answer is A.
upvoted 2 times
mifune
4 months ago
You mean something like this? It's from the AWS portal...
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Deny",
    "Action": "*",
    "Resource": "*",
    "Condition": {
      "NotIpAddress": {
        "aws:SourceIp": ["192.0.2.0/24", "203.0.113.0/24"]
      }
    }
  }
}
upvoted 1 times
...
b3llman
1 year ago
have you actually tested it? if you haven't, please do it and then comment.
upvoted 3 times
...
...
aokaddaoc
9 months, 1 week ago
I think the reason C is wrong is not that it doesn't meet the requirement, but that it is too strong: the only thing users could do is set ingress rules on SGs, and all other actions would be blocked. Both C and D have the same result in that users can no longer open a port to 0.0.0.0/0, but D is more precise because it doesn't block other actions.
upvoted 1 times
...
...
...
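Mechanically, deploying the SCP that option D describes takes two Organizations calls; a minimal boto3 sketch (the policy body mirrors the ones posted in this thread, and note the caveat several commenters raise below: aws:SourceIp matches the API caller's address, not the rule's CIDR, so test the behaviour before relying on it; the OU ID is a placeholder):

import json
import boto3

org = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOpenIngress",
            "Effect": "Deny",
            "Action": "ec2:AuthorizeSecurityGroupIngress",
            "Resource": "*",
            # As discussed in this thread, aws:SourceIp is evaluated against
            # the caller's IP, which may not be the behaviour you want.
            "Condition": {"IpAddress": {"aws:SourceIp": "0.0.0.0/0"}},
        }
    ],
}

created = org.create_policy(
    Name="deny-open-sg-ingress",
    Description="Block 0.0.0.0/0 ingress rule creation in NonProd",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attach once to the NonProd OU; it then applies to every account in the OU.
org.attach_policy(
    PolicyId=created["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-xxxx-nonprod",  # placeholder OU ID
)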
Maria2023
Highly Voted 1 year, 2 months ago
Selected Answer: D
I literally just created the SCP and it works. I saw some comments that "ec2:AuthorizeSecurityGroupIngress action doesn't have any conditions" - that is not correct. This is my SCP:
{
  "Sid": "Statement1",
  "Effect": "Deny",
  "Action": ["ec2:AuthorizeSecurityGroupIngress"],
  "Resource": ["*"],
  "Condition": {
    "IpAddress": {
      "aws:SourceIp": ["0.0.0.0/0"]
    }
  }
}
upvoted 31 times
b3llman
1 year ago
Tested and confirmed!
upvoted 5 times
...
I guess proving D works doesn't show C is incorrect. I feel both C and D could be correct because, as CuteRunRun mentioned, the SCP default is deny. Just one more question: what happens to ec2:AuthorizeSecurityGroupIngress if the SourceIp is not 0.0.0.0/0?
upvoted 1 times
vn_thanhtung
12 months ago
"For all accounts in the NonProd OU, the security team needs to remove the ability to create a security group inbound rule that includes 0.0.0.0/0 as the source." You think C can "remove the ability to create"? Crazy. SCPs allow all by default.
upvoted 1 times
...
...
longns
11 months ago
This will deny all attempts to create an inbound rule, not only inbound rules whose source IP is "0.0.0.0/0".
upvoted 3 times
Malcnorth59
3 months ago
I think that is incorrect. The SCP action is ec2:AuthorizeSecurityGroupIngress and specifically applies to ingress.
upvoted 1 times
...
...
...
MAZIADI
Most Recent 2 weeks ago
Selected Answer: D
Why option D is better than option C: explicit deny vs. implicit allow. Option C allows the action unless the aws:SourceIp is 0.0.0.0/0. This creates an implicit allow policy, which means that if any condition is not met, the action is allowed. Option D uses an explicit deny, which is more secure and straightforward. An explicit deny ensures that if the condition is met (aws:SourceIp is 0.0.0.0/0), the action is blocked regardless of other permissions.
upvoted 1 times
...
asquared16
1 month, 2 weeks ago
Selected Answer: A
It's A. Definitely A. Don't get confused.
upvoted 1 times
...
dzidis
1 month, 4 weeks ago
Voting for A
upvoted 1 times
...
teo2157
2 months, 4 weeks ago
Selected Answer: A
It's A. D is incorrect, as it shouldn't be the source IP but the destination address.
upvoted 1 times
...
Malcnorth59
3 months ago
Selected Answer: D
Option D
upvoted 1 times
...
sse69
3 months, 2 weeks ago
Selected Answer: A
SourceIp is the requester's IP address, not the CIDR referenced in the SG rule.
upvoted 3 times
...
Smart
4 months ago
A (Incorrect): The SG rule is still created briefly, which goes against the question's requirement to "remove the ability to create a security group inbound rule...". B (Incorrect): Regardless of the rule, SGs can be created and simply remain non-compliant. C (Incorrect): See D. D (Incorrect): The SourceIp condition key of an IAM policy is the requestor's IP address. It has nothing to do with the SG inbound rule's source IP; it just blocks creating any SG inbound rules when the requestor makes AWS API calls from anywhere (0.0.0.0/0). Just a crap question and choices.
upvoted 2 times
...
mav3r1ck
5 months ago
Selected Answer: D
The goal is to prevent the creation of Amazon EC2 security group inbound rules that include 0.0.0.0/0 as the source for all accounts in the NonProd Organizational Unit (OU) with the least operational overhead. Option D is the most straightforward and effective solution to meet the requirement with the least operational overhead. By configuring a Service Control Policy (SCP) to deny the ec2:AuthorizeSecurityGroupIngress action when the aws:SourceIp condition key is 0.0.0.0/0 and applying this policy to the NonProd OU, the company can ensure that no account within this OU can create security group inbound rules that expose resources to the entire internet. This approach leverages AWS Organizations' capability to apply governance and compliance policies at scale, thereby reducing the need for individual resource monitoring or post-creation remediation.
upvoted 1 times
...
gofavad926
5 months, 1 week ago
Selected Answer: D
D is going to prevent creating the rule. A is not going to prevent it; it's going to remediate it...
upvoted 1 times
...
Dgix
5 months, 3 weeks ago
A is out because creation of the SG rule is allowed, albeit briefly, before being updated.
B is noise.
C is out because SCPs don't allow.
D is the correct answer.
upvoted 2 times
...
Dafukubai
6 months, 1 week ago
Selected Answer: A
To everyone who claimed to have tested D: please try creating inbound rules with a source other than 0.0.0.0/0. D will deny all AuthorizeSecurityGroupIngress operations from your IP; that's why D "worked".
upvoted 3 times
...
8608f25
6 months, 2 weeks ago
Selected Answer: D
Option D is the most direct and efficient solution. By creating an SCP that explicitly denies the ec2:AuthorizeSecurityGroupIngress action when the source IP is 0.0.0.0/0, it prevents users in all accounts under the NonProd OU from creating such open security group rules. This enforcement happens at the API level, blocking the action before the rule is created, which aligns with the goal of reducing operational overhead and proactively enforcing security best practices. It is not option C because, Option C mentions configuring a Service Control Policy (SCP) to allow the ec2:AuthorizeSecurityGroupIngress action except when the source IP is 0.0.0.0/0. While the intention is correct, SCPs do not support allow-listing in this manner; they are designed to explicitly allow or deny actions across accounts in an AWS Organization.
upvoted 2 times
...
LazyAutonomy
6 months, 3 weeks ago
Selected Answer: A
Read the most recent comments to understand why it isn't B, C or D.
upvoted 1 times
...
Vaibs099
6 months, 3 weeks ago
It has to be A. In options C and D, aws:SourceIp compares the requester's IP address with the IP address that you specify in the policy; it is a property of the network request, not a condition available for the contents of ec2:AuthorizeSecurityGroupIngress. Option B is just a Config rule for unauthorized ports. Of these options, only A can remove the ingress rule. Confirming this condition is not available for ec2:AuthorizeSecurityGroupIngress: https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_AuthorizeSecurityGroupIngress.html Confirming the use of aws:SourceIp: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourceip
upvoted 1 times
...
gustori99
7 months ago
Selected Answer: A
Everybody who voted D: just test it yourself and you will see that it does not work. Please understand the meaning of aws:SourceIp. From the AWS documentation: "The aws:SourceIp condition key resolves to the IP address that the request originates from". The aws:SourceIp condition checks the IP address of the requestor and has nothing to do with the security group's source IP configuration. The comment from Maria2023, who claims to have tested it, is wrong because her suggested SCP denies all inbound rule creation, even if you try to configure a specific IP address in the inbound rule. Although I disagree with the wording from option A, "Deploy the updated rule to the NonProd OU", A is the only possible answer.
upvoted 4 times
...
master9
7 months ago
Selected Answer: C
"C" is the right answer as in the statement it is written "NOT" which will revert the allow condition. "Configure an SCP to allow the ec2:AuthorizeSecurityGroupIngress action when the value of the aws:SourceIp condition key is not 0.0.0.0/0. Apply the SCP to the NonProd OU".
upvoted 1 times
...
ninomfr64
8 months ago
Selected Answer: A
Not B - the vpc-sg-open-only-to-authorized-ports AWS Config managed rule checks whether security groups allowing unrestricted incoming traffic ('0.0.0.0/0' or '::/0') only allow inbound TCP or UDP connections on authorized ports. The rule is NON_COMPLIANT if such security groups do not restrict ports to those specified in the rule parameters. The scenario is about unrestricted IP addresses, not about ports. Not C and D - the aws:SourceIp key is used to compare the API caller's IP address with the IP address that you specify in the policy, and it can only be used for public IP address ranges. Thus A is the right answer, as it does the job (even if it requires some work).
upvoted 3 times
...
ayadmawla
8 months, 2 weeks ago
Selected Answer: D
"remove the ability to create" - is not the same as removing an SG after it has been created.
upvoted 4 times
...
shaaam80
8 months, 3 weeks ago
Selected Answer: D
Answer D. Regarding A, isn't it a reactive approach?
upvoted 1 times
...
edder
9 months ago
Selected Answer: A
The correct answer is A. I actually tried it and verified it.
B: Unsuitable because it only controls TCP or UDP connections.
C, D: Even after applying the created SCP, the default FullAWSAccess SCP is still applied, so rules can be created. And if you delete FullAWSAccess, you will not be able to touch the security group at all because of the implicit deny.
A: This is the answer by process of elimination.
upvoted 1 times
...
jainparag1
9 months ago
Selected Answer: A
I believe they are asking for a reactive approach here. They are allowing it to happen and at the same time handling it along with notification. Either C or D won't allow it to happen in the first place.
upvoted 1 times
...
NOZOMI
9 months, 1 week ago
Choosing D in this problem is evidence of routinely underestimating IAM; it is not befitting of a specialist. The condition key in D indicates the source IP of the API call and is not related to the control of security groups.
upvoted 2 times
...
kalitwol
9 months, 2 weeks ago
I think it's A, because both C and D reference the condition aws:SourceIp, which refers to the IP address of the client making an API call to an AWS service, not the contents of the API call. https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html "aws:SourceIp - Works with IP address operators. Use this key to compare the requester's IP address with the IP address that you specify in the policy. The aws:SourceIp condition key can only be used for public IP address ranges. Availability - This key is included in the request context, except when the requester uses a VPC endpoint to make the request. Value type - Single-valued." The aws:SourceIp condition key can be used in a policy to allow principals to make requests only from within a specified IP range. However, this policy denies access if an AWS service makes calls on the principal's behalf.
upvoted 2 times
...
severlight
9 months, 2 weeks ago
Selected Answer: D
Already-created SCPs aren't mentioned, hence we assume we have the default SCP; hence C won't work and we should choose D.
upvoted 1 times
...
whenthan
10 months, 1 week ago
Selected Answer: D
using SCPs to deny a service or action permissions
upvoted 1 times
...
Passexam4sure_com
10 months, 2 weeks ago
Selected Answer: D
Configure an SCP to deny the ec2:AuthorizeSecurityGroupIngress action when the value of the aws:SourceIp condition key is 0.0.0.0/0. Apply the SCP to the NonProd OU.
upvoted 1 times
...
Certified101
10 months, 2 weeks ago
Selected Answer: D
D is correct. A states to "deploy the updated rule to the NonProd OU" - how? What rule? Is this an SCP or Config? It doesn't state this clearly.
upvoted 1 times
...
rlf
10 months, 3 weeks ago
A. https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_AuthorizeSecurityGroupIngress.html
upvoted 1 times
rlf
10 months, 3 weeks ago
We need to understand the meaning of "aws:SourceIp". https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html https://aws.amazon.com/ko/blogs/security/how-to-automatically-revert-and-receive-notifications-about-changes-to-your-amazon-vpc-security-groups/
upvoted 1 times
...
...
M4D3V1L
10 months, 3 weeks ago
Selected Answer: A
It's A, since it already uses an EventBridge rule; also, the solution is present in the AWS documentation.
upvoted 2 times
...
longns
11 months ago
Selected Answer: A
It's never D. This is a tricky question because it requires the reader to pay attention to the detail of aws:SourceIp. You will almost certainly get it wrong if you do not understand the exact meaning of this keyword. Even when some people have tested it in practice and found it works, it's because setting aws:SourceIp = 0.0.0.0/0 will deny all IPs from creating an inbound rule (tested). For method D to work, a different condition key would be needed.
upvoted 3 times
...
Piccaso
1 year ago
Selected Answer: D
A is not reliable. D is supported.
upvoted 1 times
...
venvig
1 year ago
Selected Answer: A
Option D is NOT correct. There is no documented condition named SourceIp for ec2:AuthorizeSecurityGroupIngress. Refer to https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonec2.html
upvoted 2 times
longns
11 months ago
SourceIp is a global condition key in the AWS request context: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html But that does not mean D is correct, because it is used to validate the IP of the operator.
upvoted 1 times
...
...
allen_devops
1 year ago
The correct answer is A. For C/D, the condition aws:SourceIp checks the requester's IP, not the ingress rule's IP.
upvoted 2 times
...
xplusfb
1 year ago
Selected Answer: D
The question always states the key point. The key point here is LEAST operational overhead, and we are already using AWS Organizations, so D 100 percent works.
upvoted 1 times
...
punkbuster
1 year ago
Selected Answer: A
The answer is A, NOT D. The "aws:SourceIp" condition key picks up the IP address of the requester, not the IP address being passed into the security group. I would suggest logging into an AWS account and trying it out for yourself by changing the source of the ingress rule.
upvoted 3 times
...
CuteRunRun
1 year ago
Selected Answer: C
I think the default policy in SCPs is deny; you need to create an explicit allow policy.
upvoted 2 times
...
CuteRunRun
1 year ago
I think the default policy of SCPs is deny; you need to create an explicit allow rule. So I select C.
upvoted 1 times
...
MRL110
1 year, 1 month ago
Selected Answer: D
SCP only allows condition key in deny statements: https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_syntax.html#scp-syntax-condition
upvoted 2 times
...
NikkyDicky
1 year, 1 month ago
Selected Answer: A
D would be nice if it were supported by SCPs.
upvoted 2 times
NikkyDicky
1 year, 1 month ago
D - I actually was able to create that SCP and attach it to a member acct, but it didn't stop me from creating an SG with 0.0.0.0/0 as SourceIp ...
upvoted 1 times
...
...
SmileyCloud
1 year, 1 month ago
Selected Answer: D
It's D. I just tested it. This is the error that I get when I try to create a sec group with 0.0.0.0/0 as source: "You may be missing IAM policies that allow AuthorizeSecurityGroupIngress. You are not authorized to perform this operation. Encoded authorization failure message: <some gibberish>" And this is the policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Effect": "Deny",
      "Action": ["ec2:AuthorizeSecurityGroupIngress"],
      "Resource": ["*"],
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "0.0.0.0/0"
        }
      }
    }
  ]
}
upvoted 5 times
...
phongpg
1 year, 2 months ago
Selected Answer: A
The correct answer is C. It can't be option D: if you look at the IAM documentation here, you can see that the ec2:AuthorizeSecurityGroupIngress action doesn't have any conditions that would allow you to specify the IP addresses in the inbound/outbound rules. https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonec2.html
upvoted 2 times
phongpg
1 year, 2 months ago
Sorry answer is A, not C/D
upvoted 1 times
...
...
SkyZeroZx
1 year, 2 months ago
Selected Answer: A
A. This is a really hard question because it really baits you with the SCP, which would make a lot of sense here. Unfortunately, that condition key is not the correct one.
upvoted 1 times
...
Roontha
1 year, 2 months ago
Answer: A, based on the AWS demo of the following use case: https://aws.amazon.com/blogs/security/how-to-automatically-revert-and-receive-notifications-about-changes-to-your-amazon-vpc-security-groups/
upvoted 1 times
...
Rajivjain
1 year, 2 months ago
Selected Answer: D
SCPs support the aws:SourceIp condition key; check point "e" carefully under "Creating an SCP": https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_create.html
upvoted 1 times
Rajivjain
1 year, 2 months ago
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyIngressFromAnyIp",
      "Effect": "Deny",
      "Action": "ec2:AuthorizeSecurityGroupIngress",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:SourceIp": "0.0.0.0/0"
        }
      }
    }
  ]
}
upvoted 1 times
...
...
Darkhorse_79
1 year, 2 months ago
Selected Answer: D
The requirement is to "remove the ability to create a security group" inbound rule.
upvoted 4 times
...
mKrishna
1 year, 3 months ago
The answer is B; refer to https://docs.aws.amazon.com/whitepapers/latest/building-a-data-perimeter-on-aws/appendix-3-service-control-policy-examples.html
upvoted 1 times
...
ShinLi
1 year, 3 months ago
Selected Answer: D
Agree with D, as the question asks to stop/remove the 0.0.0.0/0 permission. Modifying the Lambda function will not work, as it only reacts after the SNS notification.
upvoted 1 times
...
manawey
1 year, 3 months ago
Selected Answer: A
A is correct, in support of RunkieMax's experience (comment below). I always take advantage when the question already gives you EventBridge and SNS. B: does not restrict NonProd developers. D (and C): AWS disagrees. https://repost.aws/questions/QUozrofOc6SEastgpFp6IJMQ/blocking-sg-rule https://security.stackexchange.com/questions/261108/scp-to-create-security-groups-in-member-aws-account
upvoted 3 times
...
Jesuisleon
1 year, 3 months ago
Selected Answer: D
The correct answer is D. The key point is "remove the ability to create a security group inbound rule", not the ability to remove an existing rule. A clearly refers to removing an already existing rule, so it's wrong. For D: I have noticed the comments referring to https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_AuthorizeSecurityGroupIngress.html To those who think D is wrong based on that page: did you really read it carefully? How could a person allow 0.0.0.0/0 inbound for an EC2 instance? First they would have to add an inbound rule with 0.0.0.0/0 to the security group. D prevents people from doing exactly that, because a security group denies everything by default unless you specify a rule.
upvoted 1 times
Jesuisleon
1 year, 2 months ago
I changed my answer to A after reading the links supplied by manawey.
upvoted 2 times
...
...
RunkieMax
1 year, 3 months ago
Selected Answer: A
We used that technique a long time ago, and we deployed the solution with a CloudFormation StackSet to all our accounts in our OU. It works fine for us.
upvoted 2 times
...
gameoflove
1 year, 3 months ago
Selected Answer: C
In my experience with AWS, it is better to use a restrictive allow condition than a deny condition.
upvoted 2 times
E1234
1 year, 3 months ago
In an SCP, allow conditions do almost nothing.
upvoted 1 times
...
...
mattlai
1 year, 3 months ago
terrible q&a from aws once again
upvoted 1 times
...
chiaseed
1 year, 3 months ago
Selected Answer: A
I first thought the answer is D but seems like A is correct. As Damijo said, "ec2:AuthorizeSecurityGroupIngress action doesn't have any conditions that would allow you to specify the ip addresses in the inbound/outbound rules." https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_AuthorizeSecurityGroupIngress.html
upvoted 1 times
dkx
1 year, 2 months ago
Be sure not to confuse SCP policy statements and resource API methods. Also note that "aws:SourceIp" is a global condition context key -- used to compare the requester's IP address with the IP address that you specify in the policy. Thus, the answer is D
upvoted 1 times
...
...
mrfretz
1 year, 3 months ago
Selected Answer: A
A. This is a really hard question because it really baits you with the SCP, which would make a lot of sense here. Unfortunately, that condition key is not the correct one.
upvoted 2 times
...
petervu
1 year, 3 months ago
Selected Answer: A
A is correct
upvoted 1 times
...
petervu
1 year, 3 months ago
Looks like A is correct.
upvoted 1 times
...
rbm2023
1 year, 3 months ago
Selected Answer: D
The question is about "removing the ability to create", not taking action after the security group rule was created. This needs to be done in the service control policy to DENY the action. Hence option D.
upvoted 3 times
...
dimeder
1 year, 4 months ago
Selected Answer: A
Just to increase the percentage of A.
upvoted 2 times
...
DWsk
1 year, 4 months ago
Selected Answer: A
This is a really hard question because it really baits you with the SCP, which would make a lot of sense here. Unfortunately, that condition key is not the correct one.
upvoted 3 times
...
Cccb35
1 year, 4 months ago
Selected Answer: A
Just to increase the percentage of A.
upvoted 2 times
...
rxhan
1 year, 4 months ago
Selected Answer: D
Option D is the most suitable solution to meet the requirement with the least operational overhead. An SCP (Service Control Policy) can be used to set organization-wide policies for AWS accounts in the organization, including the NonProd OU. We need to deny the ec2:AuthorizeSecurityGroupIngress action when the value of the aws:SourceIp condition key is 0.0.0.0/0 to prevent creating security group inbound rules with this source.
upvoted 2 times
...
Sarutobi
1 year, 4 months ago
Selected Answer: A
Just to increase the percentage of A.
upvoted 2 times
...
Anonymous9999
1 year, 4 months ago
Selected Answer: A
"aws:SourceIp" has nothing to do with Inbound rules in Security Groups. This is actually the source IP of the agent calling the EC2 API to modify a SG rule, which has nothing to do with the 'Source' field in a SG Inbound rule. https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourceip
upvoted 5 times
...
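For the option A camp, a hedged sketch of the remediation Lambda: it is driven by the CloudTrail-via-EventBridge event, whose requestParameters shape is abbreviated here from memory, so verify it against a real event; the topic ARN is a placeholder:

import json
import os

import boto3

ec2 = boto3.client("ec2")
sns = boto3.client("sns")

# Placeholder topic ARN; in a real deployment this comes from configuration.
TOPIC_ARN = os.environ.get("TOPIC_ARN", "arn:aws:sns:us-east-1:123456789012:sg-alerts")

def handler(event, context):
    # EventBridge delivers the CloudTrail record under event["detail"].
    params = event["detail"]["requestParameters"]
    group_id = params["groupId"]

    # CloudTrail nests lists under "items"; this walk assumes that shape.
    offending = []
    for perm in params.get("ipPermissions", {}).get("items", []):
        open_ranges = [r for r in perm.get("ipRanges", {}).get("items", [])
                       if r.get("cidrIp") == "0.0.0.0/0"]
        if not open_ranges:
            continue
        rebuilt = {"IpProtocol": perm["ipProtocol"],
                   "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}
        # Ports are absent for protocol "-1" (all traffic).
        if "fromPort" in perm:
            rebuilt["FromPort"] = perm["fromPort"]
            rebuilt["ToPort"] = perm["toPort"]
        offending.append(rebuilt)

    if offending:
        # Revert just the 0.0.0.0/0 permissions and tell the security team.
        ec2.revoke_security_group_ingress(GroupId=group_id, IpPermissions=offending)
        sns.publish(
            TopicArn=TOPIC_ARN,
            Message=f"Reverted 0.0.0.0/0 ingress on {group_id}: {json.dumps(offending)}",
        )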
frfavoreto
1 year, 4 months ago
Selected Answer: A
"aws:SourceIp" has nothing to do with Inbound rules in Security Groups. This is actually the source IP of the agent calling the EC2 API to modify a SG rule, which has nothing to do with the 'Source' field in a SG Inbound rule.
upvoted 4 times
...
birbyne
1 year, 4 months ago
D:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllOpenPorts",
      "Effect": "Deny",
      "Action": ["ec2:AuthorizeSecurityGroupIngress"],
      "Resource": "*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "0.0.0.0/0"
        }
      }
    }
  ]
}
upvoted 4 times
...
mfsec
1 year, 5 months ago
Selected Answer: A
As Damijo said from the docs.
upvoted 1 times
...
Arnaud92
1 year, 5 months ago
"aws:SourceIp" is used to restrict access to AWS to users whose requests come from the specific IP specified in aws:SourceIp. It is not a condition for the source IP in an SG rule.
upvoted 3 times
Arnaud92
1 year, 5 months ago
So it cannot be D for sure
upvoted 2 times
Arnaud92
1 year, 5 months ago
C is not true for the same reason (and it says allow...). B is not true because it's partial: the rule will be flagged as NON_COMPLIANT but will not be deleted without using a Systems Manager automation document. A is true and does not add a lot of operational overhead, because there is already an EventBridge rule for that.
upvoted 2 times
...
...
...
Damijo
1 year, 5 months ago
Selected Answer: A
If you look at the IAM documentation here, you can see that the ec2:AuthorizeSecurityGroupIngress action doesn't have any conditions that would allow you to specify the IP addresses in the inbound/outbound rules. https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonec2.html
upvoted 4 times
...
ramyaram
1 year, 5 months ago
Selected Answer: D
D would be the best option to meet the operational overhead requirement.
upvoted 2 times
...
taer
1 year, 5 months ago
Selected Answer: D
D is correct
upvoted 2 times
...
dev112233xx
1 year, 5 months ago
Selected Answer: D
D is the LEAST operational overhead solution
upvoted 2 times
dev112233xx
1 year, 5 months ago
Changing my answer to A. Well... after investigating, I found out that it's not possible to prevent these security group changes with an SCP.
upvoted 1 times
...
...
zejou1
1 year, 5 months ago
Selected Answer: A
C and D are out: for security groups you cannot do a deny, only allow, so D is out; and C is out because you can't do an "is not", since that is still a deny - https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-group-rules.html B is out because that AWS Config managed rule is detective only, not proactive; go ahead and review the list of different evaluation modes: https://docs.aws.amazon.com/config/latest/developerguide/managed-rules-by-evaluation-mode.html This is a legit "trick" question: you have to modify the rule to invoke an AWS Lambda to always remove it. All the other stuff in the statement is there to throw you off - you must use EventBridge to create a rule.
upvoted 4 times
...
vherman
1 year, 5 months ago
Selected Answer: D
D meets the requirements
upvoted 2 times
vherman
1 year, 5 months ago
Later I found that SourceIp is the IP address of the requester, so D isn't correct!!!
upvoted 2 times
...
...
kiran15789
1 year, 5 months ago
Selected Answer: D
Creating a Lambda and removing the rule seems weird and is definitely a lot of operational overhead. Will go with D.
upvoted 2 times
...
rtgfdv3
1 year, 5 months ago
Selected Answer: A
https://aws.amazon.com/blogs/security/how-to-automatically-revert-and-receive-notifications-about-changes-to-your-amazon-vpc-security-groups/
upvoted 4 times
...
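For completeness, the EventBridge rule the question (and the linked blog post) describes matches the CloudTrail record of the API call; a minimal boto3 sketch with illustrative names (filtering for 0.0.0.0/0 is left to the Lambda target, since matching nested requestParameters in the event pattern is brittle):

import json
import boto3

events = boto3.client("events")

events.put_rule(
    Name="sg-open-ingress-created",  # illustrative rule name
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["ec2.amazonaws.com"],
            "eventName": ["AuthorizeSecurityGroupIngress"],
        },
    }),
)

# Route matches to the remediation Lambda (option A) or straight to SNS.
events.put_targets(
    Rule="sg-open-ingress-created",
    Targets=[{
        "Id": "remediate",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:revert-sg",  # illustrative
    }],
)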
lkyixoayffasdrlaqd
1 year, 5 months ago
I don't understand the people who say D; can you tell me what the difference is between C and D?
upvoted 1 times
...
lkyixoayffasdrlaqd
1 year, 6 months ago
Selected Answer: B
Answer should be B; The solution that will meet the requirement with the LEAST operational overhead is option B: Add the vpc-sg-open-only-to-authorized-ports AWS Config managed rule to the NonProd OU. This option is the least operational overhead because it utilizes an existing AWS Config managed rule, which means that there is no need to create or deploy any new resources or code. The vpc-sg-open-only-to-authorized-ports rule will automatically evaluate all security groups in the NonProd OU and report any that allow inbound traffic from 0.0.0.0/0. This rule will also allow security groups to be created or updated with any other source IP address. Option A requires the creation and deployment of a Lambda function, which will require additional operational overhead. Option C requires the configuration of an SCP, which can be complex and may cause unintended consequences if not configured properly. Option D is similar to Option C but uses a deny policy instead of an allow policy, which can be more difficult to manage and troubleshoot.
upvoted 2 times
lkyixoayffasdrlaqd
1 year, 6 months ago
Here is the link: https://docs.aws.amazon.com/config/latest/developerguide/vpc-sg-open-only-to-authorized-ports.html
upvoted 1 times
Sarutobi
1 year, 6 months ago
But does it act upon it, or is it just marked as non-compliant?
upvoted 2 times
anita_student
1 year, 6 months ago
Even if it acts upon it and deletes the rule, it didn't stop developers from creating the rule in the first place, hence it doesn't meet the criteria.
upvoted 3 times
...
...
...
...
God_Is_Love
1 year, 6 months ago
D is correct. Refer to the SCP usage strategies: https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_strategies.html In AWS Organizations, the FullAWSAccess SCP is added by default and applied to all OUs/member accounts. So an allow is already there, and we just need to add a deny and apply it to the NonProd OU. For C to be the answer, we would need the additional step of reworking that default allow for all OUs and member accounts, which is tedious and against least operational overhead; that is the whole reason FullAWSAccess is added by default in AWS Organizations.
upvoted 2 times
...
Nidjo
1 year, 6 months ago
The answer is A; the condition aws:SourceIp doesn't exist for this API call.
upvoted 1 times
...
kiran15789
1 year, 6 months ago
Selected Answer: D
option D as it has least operational overhead
upvoted 3 times
...
c73bf38
1 year, 6 months ago
Selected Answer: D
D. Configure an SCP to deny the ec2:AuthorizeSecurityGroupIngress action when the value of the aws:SourceIp condition key is 0.0.0.0/0. Apply the SCP to the NonProd OU. This solution leverages AWS Organizations' Service Control Policies (SCPs) to deny the ec2:AuthorizeSecurityGroupIngress action when the source IP is 0.0.0.0/0. This means that any attempt to create a security group inbound rule with that source IP will be blocked at the organizational level, without the need for any additional resources or configurations in individual accounts. This approach has the least operational overhead as it requires only the configuration of an SCP in the NonProd OU, which can be easily managed and updated.
upvoted 3 times
...
spd
1 year, 6 months ago
Selected Answer: A
A - "remove the ability to create a security group inbound rule that includes 0.0.0.0/0 as the source" - the requirement here is to remove the rule, and option D does not allow creating it based on SourceIp... so it should be A.
upvoted 1 times
Pete697989
1 year, 5 months ago
It says "remove the ability to create the rule", not "remove the rule after it's created".
upvoted 1 times
...
...
c73bf38
1 year, 6 months ago
Selected Answer: A
A. Modify the EventBridge rule to invoke an AWS Lambda function to remove the security group inbound rule and to publish to the SNS topic. Deploy the updated rule to the NonProd OU would be the best option for removing the ability to create a security group inbound rule that includes 0.0.0.0/0 as the source with the least operational overhead. This solution allows the security team to remove the inbound rule that includes 0.0.0.0/0 as the source when the event occurs, reducing the need for manual intervention.
upvoted 1 times
c73bf38
1 year, 6 months ago
NVMD; it needs to remove the ability to create, so D is the correct answer. The solution that meets this requirement with the LEAST operational overhead is option D: configure an SCP to deny the ec2:AuthorizeSecurityGroupIngress action when the value of the aws:SourceIp condition key is 0.0.0.0/0, and apply the SCP to the NonProd OU. This will prevent the creation of security group inbound rules that include 0.0.0.0/0 as the source, without any need to modify the EventBridge rule or the AWS Config settings.
upvoted 1 times
...
...
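For those weighing option A, the remediation flow can be made concrete. Below is a minimal boto3 sketch of the kind of Lambda function option A describes; the CloudTrail event shape, topic ARN, and function wiring are assumptions for illustration, not details given in the question.

    import boto3

    ec2 = boto3.client("ec2")
    sns = boto3.client("sns")

    # Placeholder topic ARN for illustration.
    TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:sg-rule-alerts"

    def handler(event, context):
        # EventBridge delivers the CloudTrail record for
        # AuthorizeSecurityGroupIngress; the field layout below is assumed.
        params = event["detail"]["requestParameters"]
        group_id = params["groupId"]

        for perm in params["ipPermissions"]["items"]:
            for ip_range in perm.get("ipRanges", {}).get("items", []):
                if ip_range.get("cidrIp") != "0.0.0.0/0":
                    continue
                revoke = {"IpProtocol": perm["ipProtocol"],
                          "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}
                if "fromPort" in perm:  # absent for protocol "-1" (all traffic)
                    revoke["FromPort"] = perm["fromPort"]
                    revoke["ToPort"] = perm["toPort"]
                # Remove the offending rule, then notify the security team.
                ec2.revoke_security_group_ingress(GroupId=group_id,
                                                  IpPermissions=[revoke])
                sns.publish(TopicArn=TOPIC_ARN,
                            Subject="Open ingress rule removed",
                            Message="Revoked 0.0.0.0/0 ingress on " + group_id)

Note that this approach is reactive: the rule exists briefly before the function revokes it, which is the core of the argument for D over A.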
[Removed]
1 year, 6 months ago
Selected Answer: D
Surely this is 100% D. The question is about preventing the creation of the rule, not how to remediate such rules once they exist. AWS Organizations is already set up, so the overhead of applying an SCP to the NonProd OU is minimal:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": ["ec2:AuthorizeSecurityGroupIngress"],
      "Resource": "*",
      "Condition": {
        "StringEquals": {"ec2:sourceIp": "0.0.0.0/0"}
      }
    }
  ]
}
upvoted 3 times
...
moota
1 year, 6 months ago
Selected Answer: A
According to ChatGPT, In AWS, the aws:SourceIp condition key represents the source IP address of a request. The value of aws:SourceIp is determined by AWS and is based on the IP address of the client that made the request. For example, if a user makes a request to an AWS service, the IP address of the user's computer or device would be used as the value of aws:SourceIp in the request.
upvoted 3 times
...
snani10
1 year, 6 months ago
Selected Answer: A
I don't think aws:SourceIp is the value in the security group rule; it is the IP of the user who is updating the security group. https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_aws_deny-ip.html
upvoted 4 times
...
jooncco
1 year, 6 months ago
Selected Answer: A
A is correct. Values inside an AWS resource (such as a rule's CIDR) CANNOT be matched by policy condition keys.
upvoted 1 times
oatif
1 year, 6 months ago
A requires too much work. An SCP is like a guardrail: it removes the ability for admins to assign certain permissions to users or services. Regarding your point, aws:SourceIp does refer to the API caller's public IP, but note that the IP in the rule is 0.0.0.0/0, which is open to the entire internet, so anybody can reach the NonProd resource.
upvoted 1 times
...
...
Musk
1 year, 6 months ago
Selected Answer: A
Voting for A as per daybey's comment
upvoted 2 times
...
daybey
1 year, 7 months ago
Selected Answer: A
I would go for A. Not B: vpc-sg-open-only-to-authorized-ports does not exist. Not C & D: SCPs can explicitly deny the ec2:AuthorizeSecurityGroupIngress action; however, the aws:SourceIp key does not refer to the value of the ingress rule but to the API caller's own public IP. See this please: https://stackoverflow.com/a/61243672
upvoted 4 times
Musk
1 year, 6 months ago
Well spotted that aws:SourceIp refers to the caller creating the SG, not to the SG's allowed IP addresses.
upvoted 1 times
...
...
Question #45 Topic 1

A company hosts a Git repository in an on-premises data center. The company uses webhooks to invoke functionality that runs in the AWS Cloud. The company hosts the webhook logic on a set of Amazon EC2 instances in an Auto Scaling group that the company set as a target for an Application Load Balancer (ALB). The Git server calls the ALB for the configured webhooks. The company wants to move the solution to a serverless architecture.

Which solution will meet these requirements with the LEAST operational overhead?

  • A. For each webhook, create and configure an AWS Lambda function URL. Update the Git servers to call the individual Lambda function URLs.
  • B. Create an Amazon API Gateway HTTP API. Implement each webhook logic in a separate AWS Lambda function. Update the Git servers to call the API Gateway endpoint.
  • C. Deploy the webhook logic to AWS App Runner. Create an ALB, and set App Runner as the target. Update the Git servers to call the ALB endpoint.
  • D. Containerize the webhook logic. Create an Amazon Elastic Container Service (Amazon ECS) cluster, and run the webhook logic in AWS Fargate. Create an Amazon API Gateway REST API, and set Fargate as the target. Update the Git servers to call the API Gateway endpoint.

Correct Answer: C 🗳️

Community vote distribution
B (75%)
A (16%)
10%

masetromain
Highly Voted 1 year, 7 months ago
Selected Answer: B
B. Create an Amazon API Gateway HTTP API. Implement each webhook logic in a separate AWS Lambda function. Update the Git servers to call the API Gateway endpoint. This solution will provide low operational overhead as it utilizes the serverless capabilities of AWS Lambda and API Gateway, which automatically scales and manages the underlying infrastructure and resources. It also allows for the webhook logic to be easily managed and updated through the API Gateway interface. The answer should be B because it is the best solution in terms of operational overhead.
upvoted 22 times
masetromain
1 year, 7 months ago
Option A would require updating the Git servers to call individual Lambda function URLs for each webhook, which would be more complex and time-consuming than calling a single API Gateway endpoint. Option C would require deploying the webhook logic to AWS App Runner, which would also be more complex and time-consuming than using an API Gateway. Option D would also require containerizing the webhook logic and creating an ECS cluster and Fargate, which would also add complexity and operational overhead compared to using an API Gateway.
upvoted 8 times
hobokabobo
1 year, 6 months ago
I do agree with B. However, on the Git server side it makes no difference whether one calls AWS directly or makes a REST call via the gateway. E.g., in Python it makes no difference whether you use the boto module (call Lambda) or the requests module (REST API). If one implements it via shell, it makes no difference whether one uses the AWS CLI (invoke Lambda directly) or curl (make a REST call). Similar for other implementations.
upvoted 2 times
hobokabobo
1 year, 6 months ago
As an addition, why B is still better: it hides the implementation details and decouples by introducing an interface. With that, the AWS team may change whatever it needs to implement the interface, while the Git side can use whatever it deems necessary without caring about implementation details.
upvoted 2 times
...
...
...
...
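To make option B concrete, a single webhook's handler behind an API Gateway HTTP API can be as small as the sketch below; the payload fields are assumptions, since the question doesn't specify the Git server's webhook format.

    import json

    def handler(event, context):
        # HTTP APIs (payload format 2.0) deliver the webhook body as a
        # string; parse it and pull out whatever fields the Git server sends.
        body = json.loads(event.get("body") or "{}")
        repo = body.get("repository", {}).get("name", "unknown")

        # ... webhook-specific logic would go here ...

        return {"statusCode": 200,
                "body": json.dumps({"received": True, "repository": repo})}

Each webhook gets its own function and its own route (e.g. POST /hooks/deploy), so the Git servers only ever see the one API Gateway endpoint.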
ninomfr64
Highly Voted 8 months ago
Selected Answer: A
I need help here: what's wrong with Lambda function URLs? With A I just need to manage my Lambda functions; updates go through updating my aliases to point to a new version. I'm only missing the capabilities provided by API Gateway, which don't seem to be required here (transformations, throttling, quotas, caching, API keys, auth, OpenAPI, ...). With B I still need to implement each webhook's logic in a separate AWS Lambda function and update the Git server, plus I need to operate API Gateway. Every other option requires two or more services, generating more operations. Also: not C, as App Runner is not a valid ALB target (valid targets are private IPs, ECS, EC2 instances, Lambda); not D, as you cannot set Fargate as an API Gateway target (while you can use ECS as a target). Can you help me understand why B requires less operational overhead?
upvoted 6 times
Malcnorth59
3 months ago
Option A requires that you update the webhooks for each Lambda function. This creates considerable operational overhead, not just for the initial change but going forward as well. API Gateway (B) decouples the functions from the webhooks.
upvoted 2 times
...
...
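For comparison, option A's setup is also small. A rough boto3 sketch of publishing a function URL for one webhook follows; the function name is a placeholder, and whether NONE auth is acceptable depends on the Git server's capabilities.

    import boto3

    lam = boto3.client("lambda")

    # Create a public function URL for one webhook function (name assumed).
    resp = lam.create_function_url_config(
        FunctionName="git-webhook-deploy",
        AuthType="NONE",  # the Git server sends plain HTTPS, no SigV4
    )

    # Unauthenticated callers also need a matching resource-based policy.
    lam.add_permission(
        FunctionName="git-webhook-deploy",
        StatementId="AllowPublicFunctionUrl",
        Action="lambda:InvokeFunctionUrl",
        Principal="*",
        FunctionUrlAuthType="NONE",
    )

    print("Configure this URL in the Git webhook:", resp["FunctionUrl"])

The trade-off the thread is debating: this must be repeated (and the Git server updated) per function, whereas API Gateway concentrates the routing in one place.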
subbupro
Most Recent 2 days, 7 hours ago
C is the best one. Operational overhead: the existing webhook logic would need to be rewritten for Lambda, but with C we can reuse the same logic and only handle deployment activities. Please go with C.
upvoted 1 times
...
Malcnorth59
3 months ago
Selected Answer: B
A: large operational overhead
B: my choice
C: App Runner doesn't use an ALB
D: unnecessary complexity with containers
upvoted 2 times
...
Fu7ed
3 months, 2 weeks ago
https://aws.amazon.com/ko/solutions/implementations/git-to-s3-using-webhooks/
upvoted 2 times
Fu7ed
3 months, 2 weeks ago
choose B
upvoted 1 times
...
...
kz407
5 months, 1 week ago
Selected Answer: B
Given the current answers, I think B is the only possible option with the least overhead. C would have been a better candidate than B if it had mentioned including the App Runner service in a target group (TG) and assigning the TG as the target for the API Gateway. As it stands now, C is not correct because an App Runner app can't be directly assigned as a target for API Gateway.
upvoted 1 times
...
gofavad926
5 months, 1 week ago
Selected Answer: A
A, because it is the solution with the least operational overhead. Option B will also create a new Lambda function per webhook, and you have to define the specific path in API Gateway and integrate it with your specific Lambda...
upvoted 4 times
...
bjexamprep
5 months, 3 weeks ago
Selected Answer: B
A Lambda function is the easiest way to implement the webhook logic; App Runner and ECS both require more ops overhead, so the answer is between A and B. Some argue that A introduces the ops overhead of mapping every Lambda function to a webhook, but with B users don't avoid that mapping; they just move the Lambda function mapping to API Gateway. The mapping still needs to be done, and that is ops overhead that cannot be ignored. I'm guessing the question designer prefers API Gateway because the phrase "Update the Git servers to call the individual Lambda function URLs" doesn't look good. In reality, the repo developers create the Lambda function and know its URL, so it's very easy to invoke the Lambda function from the webhook; no additional API Gateway is required.
upvoted 1 times
...
master9
7 months ago
Selected Answer: C
You can set App Runner as a target for ALB. AWS App Runner can use your code. You can use AWS App Runner to create and manage services based on two fundamentally different service sources: source code and source image. App Runner starts, runs, scales, and balances your service regardless of the source type. You can use the CI/CD capability of App Runner to track changes to your source image or code. When App Runner discovers a change, it automatically builds (for source code) and deploys the new version to your App Runner service
upvoted 1 times
djeong95
6 months ago
Looks like App Runner is built more for deploying web applications rather than hosting webhook logic.
upvoted 1 times
...
...
uas99
8 months ago
A is the right answer, as there is no need to introduce a gateway here.
upvoted 2 times
...
subbupro
8 months, 3 weeks ago
Least operations is the key. App Runner is AWS managed and can be deployed easily; with A and B we need to create a Lambda for each webhook, which is very complex. So C would be correct.
upvoted 1 times
jpa8300
7 months, 4 weeks ago
ninomfr64 says that App runner cannot be a target for ALB, so that's the reason you cannot select C.
upvoted 2 times
...
...
severlight
9 months, 2 weeks ago
Selected Answer: B
Don't see the exact reasons to not choose A for now, but B will work for sure.
upvoted 1 times
severlight
9 months, 2 weeks ago
UPD: Don't see the exact reasons why A won't work for now, but B will work for sure.
upvoted 1 times
...
...
whenthan
10 months, 1 week ago
Selected Answer: B
reducing operational overhead!
upvoted 1 times
...
Andy97229
10 months, 2 weeks ago
Selected Answer: C
B vs C. Looking at App Runner C makes more sense.
upvoted 1 times
...
sam_cao
11 months, 1 week ago
Selected Answer: C
The comments below supporting option B focus only on how Lambda + API Gateway can help reduce operational overhead. Considering that in the question's scenario we already have the source code, wouldn't it be easier to just point App Runner at the code repo and let it handle the task? Reimplementing all the logic would consume a lot more time.
upvoted 1 times
SuperDuperPooperScooper
9 months, 3 weeks ago
Watch this video from AWS: at 4:05 he says App Runner is serverless and that there are no load balancers. Since the answer mentions load balancers, it is incorrect. https://www.youtube.com/watch?v=HJsULvSJWes I found the video in this AWS post: https://aws.amazon.com/blogs/containers/introducing-aws-app-runner/
upvoted 3 times
marcosallanluz
5 days ago
App Runner has auto scaling; you just configure it on the ELB.
upvoted 1 times
...
...
...
CuteRunRun
1 year ago
Selected Answer: B
I prefer B
upvoted 1 times
...
SmileyCloud
1 year, 1 month ago
Selected Answer: B
API GW and Lambda. Here is your architecture: https://aws.amazon.com/solutions/implementations/git-to-s3-using-webhooks/
upvoted 5 times
...
NikkyDicky
1 year, 1 month ago
Selected Answer: B
B makes sense
upvoted 1 times
...
emiliocb4
1 year, 2 months ago
Selected Answer: C
To meet the least-operational-overhead requirement I will go with C. B seems too disruptive, implementing "each logic" in a separate Lambda.
upvoted 3 times
sam_cao
11 months, 1 week ago
I agree. We don't have to do any coding if we choose C.
upvoted 1 times
...
...
Sarutobi
1 year, 3 months ago
Interesting that there is no more debate here about option A. I still think B is the way to go because AWS recommends integrating with GitLab via https://aws-quickstart.github.io/quickstart-git2s3/, and that is what we use. But if option A works, it would be the "LEAST operational overhead." I think masetromain talked about it, but I see it differently: it can be a single Lambda function that reads the webhook payload to continue the pipeline; basically the same idea but without API Gateway in front.
upvoted 1 times
b3llman
1 year ago
Option A works for sure, but managing API Gateway is easier than managing function URLs on every single Lambda function.
upvoted 1 times
...
...
gameoflove
1 year, 3 months ago
Selected Answer: B
B is the best option as per the question.
upvoted 1 times
...
RaghavendraPrakash
1 year, 4 months ago
I go with C. Among the options we have Lambda and App Runner. We don't know if the functionality can be repurposed for Lambda; however, it can be deployed with App Runner with the least operational overhead.
upvoted 4 times
...
dev112233xx
1 year, 5 months ago
Selected Answer: B
B makes sense ✅
upvoted 3 times
...
moota
1 year, 6 months ago
Selected Answer: B
Here's what ChatGPT has to say. In general, if you're looking for the option with the least operational overhead and you're comfortable with a fully managed, serverless environment, then AWS Lambda with API Gateway may be the better choice. However, if you require more control over your environment or need to use containers, then AWS App Runner with ALB may be the better option.
upvoted 2 times
...
Untamables
1 year, 7 months ago
Selected Answer: B
https://aws.amazon.com/solutions/implementations/git-to-s3-using-webhooks/
upvoted 3 times
...
AjayD123
1 year, 7 months ago
Selected Answer: B
Api Gateway with Lambda https://medium.com/mindorks/building-webhook-is-easy-using-aws-lambda-and-api-gateway-56f5e5c3a596
upvoted 3 times
...
Question #46 Topic 1

A company is planning to migrate 1,000 on-premises servers to AWS. The servers run on several VMware clusters in the company’s data center. As part of the migration plan, the company wants to gather server metrics such as CPU details, RAM usage, operating system information, and running processes. The company then wants to query and analyze the data.

Which solution will meet these requirements?

  • A. Deploy and configure the AWS Agentless Discovery Connector virtual appliance on the on-premises hosts. Configure Data Exploration in AWS Migration Hub. Use AWS Glue to perform an ETL job against the data. Query the data by using Amazon S3 Select.
  • B. Export only the VM performance information from the on-premises hosts. Directly import the required data into AWS Migration Hub. Update any missing information in Migration Hub. Query the data by using Amazon QuickSight.
  • C. Create a script to automatically gather the server information from the on-premises hosts. Use the AWS CLI to run the put-resource-attributes command to store the detailed server data in AWS Migration Hub. Query the data directly in the Migration Hub console.
  • D. Deploy the AWS Application Discovery Agent to each on-premises server. Configure Data Exploration in AWS Migration Hub. Use Amazon Athena to run predefined queries against the data in Amazon S3.

Correct Answer: C 🗳️

Community vote distribution
D (90%)
10%

masetromain
Highly Voted 1 year, 7 months ago
Selected Answer: D
The correct answer is D: Deploy the AWS Application Discovery Agent to each on-premises server. Configure Data Exploration in AWS Migration Hub. Use Amazon Athena to run predefined queries against the data in Amazon S3. Here is why the other choices are not correct: A. Deploy and configure the AWS Agentless Discovery Connector virtual appliance on the on-premises hosts. Configure Data Exploration in AWS Migration Hub. Use AWS Glue to perform an ETL job against the data. Query the data by using Amazon S3 Select. - AWS Agentless Discovery Connector will help in discovering and inventory servers but it does not provide the same level of detailed metrics as the AWS Application Discovery Agent, it also does not cover process information.
upvoted 45 times
masetromain
1 year, 7 months ago
B. Export only the VM performance information from the on-premises hosts. Directly import the required data into AWS Migration Hub. Update any missing information in Migration Hub. Query the data by using Amazon QuickSight. - It does not cover process information and it's not the best way to collect the required data, it's not efficient and it might miss some important information. C. Create a script to automatically gather the server information from the on-premises hosts. Use the AWS CLI to run the put-resource-attributes command to store the detailed server data in AWS Migration Hub. Query the data directly in the Migration Hub console. - this solution might not be very reliable and it does not cover process information, also it does not provide a way to query and analyze the data.
upvoted 6 times
masetromain
1 year, 7 months ago
D. Deploy the AWS Application Discovery Agent to each on-premises server. Configure Data Exploration in AWS Migration Hub. Use Amazon Athena to run predefined queries against the data in Amazon S3. - This is the correct answer as it covers all the requirements mentioned in the question, it will allow collecting the detailed metrics, including process information and it provides a way to query and analyze the data using Amazon Athena.
upvoted 5 times
...
...
...
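To illustrate the query step in option D: once data exploration is turned on, agent data lands in S3 and can be queried with Athena. A rough boto3 sketch follows; the database name, table names (such as os_info_agent and processes_agent), and columns are assumptions to verify against the data-exploration schema in your own account.

    import boto3

    athena = boto3.client("athena")

    # Illustrative query joining OS details with discovered processes.
    QUERY = """
    SELECT o.host_name, o.os_name, p.name AS process_name
    FROM os_info_agent o
    JOIN processes_agent p ON o.agent_id = p.agent_id
    LIMIT 50
    """

    athena.start_query_execution(
        QueryString=QUERY,
        QueryExecutionContext={"Database": "application_discovery_service_database"},
        ResultConfiguration={"OutputLocation": "s3://my-ads-athena-results/"},
    )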
icassp
Highly Voted 1 year, 7 months ago
Selected Answer: D
Choosing between A and D. For A, how would S3 Select handle the querying?
upvoted 6 times
oatif
1 year, 6 months ago
I think A is a better solution because the Agentless Discovery Connector is custom-made for the VMware environment. It will save us time and collect all the necessary data we need; installing a Discovery Agent on every server would be very time-consuming. S3 Select allows simple select operations against your raw data. I don't think we need Athena for this.
upvoted 3 times
djeong95
6 months ago
As written by jainparag1, S3 Select is definitely the wrong solution here. As you said, it only allows for very simple select operations. Athena is a better way to go once you have configured the Migration hub settings correctly.
upvoted 1 times
...
jainparag1
9 months ago
A is horrible: you can only write simple SQL using S3 Select, but here you need a more capable solution to query these metrics. D satisfies all the requirements.
upvoted 3 times
...
...
...
Jason666888
Most Recent 3 weeks, 1 day ago
Selected Answer: D
D for sure
upvoted 1 times
...
vip2
3 months, 1 week ago
Selected Answer: D
See https://docs.aws.amazon.com/application-discovery/latest/userguide/what-is-appdiscovery.html. For VMs hosted on VMware, you can use both the Agentless Collector and the Discovery Agent to perform discovery simultaneously. The Agentless Collector captures system performance information and resource utilization for each VM running in the vCenter, regardless of what operating system is in use. However, it cannot "look inside" each of the VMs, and as such cannot figure out what processes are running on each VM nor what network connections exist. Therefore, if you need this level of detail and want to take a closer look at some of your existing VMs in order to assist in planning your migration, you can install the Discovery Agent on an as-needed basis.
upvoted 1 times
...
gofavad926
5 months, 1 week ago
Selected Answer: D
D is correct
upvoted 1 times
...
whichonce
6 months ago
Selected Answer: A
Definitely A. https://docs.aws.amazon.com/application-discovery/latest/userguide/agentless-collector-data-collected-vmware.html VMware supports the agentless collector with AWS, and the data can be imported via Migration Hub.
upvoted 1 times
...
8608f25
6 months, 2 weeks ago
Selected Answer: D
Option D is the most efficient and streamlined solution for the requirements. Deploying the AWS Application Discovery Agent on each on-premises server allows for detailed collection of server metrics, including CPU usage, RAM usage, operating system details, and running processes. By configuring Data Exploration in AWS Migration Hub, the collected data can be analyzed and queried effectively. Using Amazon Athena for querying enables powerful SQL-based exploration of the data stored in Amazon S3, offering a flexible and scalable way to analyze the migration readiness and planning data. It is not option C because option C involves creating a custom script to gather server information and using the AWS CLI to store data in AWS Migration Hub. While this approach could potentially work, it requires significant manual effort to develop, deploy, and maintain the scripts across 1,000 servers, which is not ideal for minimizing operational overhead.
upvoted 1 times
...
ninomfr64
8 months ago
Selected Answer: D
Not A - as the AWS Agentless Discovery Connector does not provide process visibility. Not B - as the Migration Hub import functionality does not support process data (https://docs.aws.amazon.com/cli/latest/reference/mgh/put-resource-attributes.html); also, I do not see how to query with QuickSight, as there is no direct integration with Migration Hub to my knowledge. Not C - as it seems the put-resource-attributes command does not support process data (https://docs.aws.amazon.com/cli/latest/reference/mgh/put-resource-attributes.html). D is correct, as the Discovery Agent collects the required data including processes, and Data Exploration in Migration Hub allows you to use Amazon Athena and comes with predefined queries as well. https://docs.aws.amazon.com/application-discovery/latest/userguide/explore-data.html
upvoted 1 times
...
edder
9 months ago
Selected Answer: D
https://docs.aws.amazon.com/application-discovery/latest/userguide/explore-data.html
upvoted 1 times
...
punkbuster
1 year ago
Selected Answer: D
The agent-based collector can collect data related to running processes, which is not available to the Agentless Collector. Check for yourself in the FAQs: https://aws.amazon.com/application-discovery/faqs/
upvoted 1 times
...
xplusfb
1 year ago
Selected Answer: A
As far as I've learned, for VM-based environments we can go agentless, using an OVA image to collect the metrics and so on. I'm going with A. https://docs.aws.amazon.com/application-discovery/latest/userguide/agentless-data-collected.html
upvoted 2 times
...
chico2023
1 year ago
Selected Answer: D
Answer: D The requirement: "the company wants to gather server metrics such as CPU details, RAM usage, operating system information, and running processes." From https://aws.amazon.com/application-discovery/faqs/: === AWS Application Discovery Service Discovery Agent Q: What data does the AWS Application Discovery Service Discovery Agent capture? The Discovery Agent captures system configuration, system performance, running processes, and details of the network connections between systems.
upvoted 1 times
chico2023
1 year ago
=== Agentless Collector Q: What data does the Agentless Collector capture? The Agentless Collector is delivered as an Open Virtual Appliance (OVA) package that can be deployed to a VMware host. The type of data collected will depend on the capabilities that you configure. If the credentials are provided to connect to vCenter, the Agentless Collector will collect VM inventory, configuration, and performance history data such as CPU, memory, and disk usage. If credentials are provided to connect to databases such as Oracle, SQL Server, MySQL, or PostgreSQL, the Agentless Collector will collect version, edition, and schema data. Server and database information is uploaded to the Application Discovery Service data store. Database information can be sent to AWS DMS Fleet Advisor for analysis.
upvoted 1 times
...
...
CuteRunRun
1 year ago
Selected Answer: D
I prefer D
upvoted 1 times
...
ggrodskiy
1 year ago
Correct A. D uses agent-based discovery, which requires installing an agent on each on-premises server. This can be cumbersome and intrusive for a large number of servers. It also does not explain how to use AWS Glue to perform an ETL job against the data.
upvoted 1 times
...
NikkyDicky
1 year, 1 month ago
Selected Answer: D
it's a D
upvoted 1 times
...
Maria2023
1 year, 2 months ago
Selected Answer: D
Initially, I went for A, but the Discovery Connector only seems to collect information from the hypervisor, which excludes memory usage, processes, etc. So I end up with D. Note to myself and a reminder to everyone: read the questions carefully; this is not an associate exam.
upvoted 5 times
...
bcx
1 year, 2 months ago
Selected Answer: A
The key is the VMware environment; for that, the obvious solution is A, IMHO.
upvoted 1 times
...
mfsec
1 year, 5 months ago
Selected Answer: D
D is the answer, because the agentless collector can't capture everything.
upvoted 2 times
...
dev112233xx
1 year, 5 months ago
Selected Answer: D
A is wrong, because the agentless collector can't collect processes, only CPU, RAM, and disk I/O.
upvoted 5 times
...
Ajani
1 year, 5 months ago
If you have virtual machines (VMs) that are running in the VMware vCenter environment, you can use the Agentless Collector to collect system information without having to install an agent on each VM. Instead, you load this on-premises appliance into vCenter and allow it to discover all of its hosts and VMs. Agentless Collector captures system performance information and resource utilization for each VM running in the vCenter, regardless of what operating system is in use. However, it cannot “look inside” each of the VMs, and as such, cannot figure out what processes are running on each VM nor what network connections exist.
upvoted 1 times
Ajani
1 year, 5 months ago
Going with D; the Agentless Discovery Connector does not gather process information. "The" on-premises hosts (physical servers?) will be running on an ESXi server; you can deploy the Discovery Agent on each server (VM). I might be overthinking it.
upvoted 2 times
...
...
sambb
1 year, 5 months ago
Selected Answer: D
With the agentless collector you cannot get running processes on the VMs, and you cannot export the data to CSV or to Athena for further querying
upvoted 2 times
...
God_Is_Love
1 year, 6 months ago
Even though the question does not ask for least operational effort, performance, HA, etc., the solution needs to keep those in mind. Deploying on each server is not a practically good solution, so D cannot be the answer. Instead, an appliance that does this discovery job is better, which is exactly what A offers; moreover, A is built specifically for the VMware use case. I choose A.
upvoted 2 times
...
monkeyfish
1 year, 6 months ago
Selected Answer: A
The answer is A. The AWS Agentless Discovery Connector is used when migrating servers in VMware clusters, and S3 Select can be used to query. AWS SAs would only recommend installing the agent on each on-premises server for physical hosts, not VMware servers.
upvoted 1 times
c73bf38
1 year, 6 months ago
S3 Select supports querying one file at a time. With Amazon Athena, you can perform SQL against any number of objects, or even entire bucket paths.
upvoted 2 times
...
...
pravi1
1 year, 7 months ago
D will be correct in my opinion.
upvoted 3 times
...
silkroad78
1 year, 7 months ago
D, since the Agentless Collector can't collect processes: https://docs.aws.amazon.com/application-discovery/latest/userguide/what-is-appdiscovery.html
upvoted 3 times
masetromain
1 year, 7 months ago
You are correct, AWS Agentless Discovery does not collect information about processes running on the servers. It primarily focuses on gathering information about the server's hardware, operating system, and network configuration. It is mainly used to discover and inventory servers, but it doesn't provide the same level of detailed metrics as the AWS Application Discovery Agent. The AWS Application Discovery Agent is the best option if the company wants to gather information about running processes on the servers, as it can provide more detailed metrics than Agentless Discovery.
upvoted 1 times
...
...
masetromain
1 year, 7 months ago
Selected Answer: A
The correct solution is A. Deploy and configure the AWS Agentless Discovery Connector virtual appliance on the on-premises hosts. Configure Data Exploration in AWS Migration Hub. Use AWS Glue to perform an ETL job against the data. Query the data by using Amazon S3 Select. This solution allows the company to gather detailed server metrics from the on-premises hosts by deploying the Agentless Discovery Connector virtual appliance. The data can then be imported into AWS Migration Hub for further analysis. The company can then use AWS Glue to perform an ETL job on the data and query it using Amazon S3 Select for further analysis.
upvoted 3 times
...
Question #47 Topic 1

A company is building a serverless application that runs on an AWS Lambda function that is attached to a VPC. The company needs to integrate the application with a new service from an external provider. The external provider supports only requests that come from public IPv4 addresses that are in an allow list.

The company must provide a single public IP address to the external provider before the application can start using the new service.

Which solution will give the application the ability to access the new service?

  • A. Deploy a NAT gateway. Associate an Elastic IP address with the NAT gateway. Configure the VPC to use the NAT gateway.
  • B. Deploy an egress-only internet gateway. Associate an Elastic IP address with the egress-only internet gateway. Configure the elastic network interface on the Lambda function to use the egress-only internet gateway.
  • C. Deploy an internet gateway. Associate an Elastic IP address with the internet gateway. Configure the Lambda function to use the internet gateway.
  • D. Deploy an internet gateway. Associate an Elastic IP address with the internet gateway. Configure the default route in the public VPC route table to use the internet gateway.

Correct Answer: C 🗳️

Community vote distribution
A (92%)
7%

masetromain
Highly Voted 1 year, 7 months ago
Selected Answer: A
A. Deploy a NAT gateway. Associate an Elastic IP address with the NAT gateway. Configure the VPC to use the NAT gateway. This solution will give the Lambda function access to the internet by routing its outbound traffic through the NAT gateway, which has a public Elastic IP address. This will allow the external provider to whitelist the single public IP address associated with the NAT gateway, and enable the application to access the new service.
upvoted 30 times
Jacky_exam
1 year, 4 months ago
Options A and B are not appropriate solutions because they involve deploying a NAT gateway or an egress-only internet gateway, which are used for different purposes, such as allowing resources in a private subnet to access the internet while using a static public IP address. These options will not provide the Lambda function with a single public IP address to be used for external requests.
upvoted 5 times
ninomfr64
7 months, 4 weeks ago
The question includes "The external provider supports only requests that come from public IPv4 addresses that are in an allow list"; this implies the Lambda needs to call the external provider.
upvoted 1 times
...
...
JMAN1
8 months ago
Big thanks to you, masetromain.
upvoted 2 times
...
...
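As a rough sketch of option A's plumbing (resource IDs are placeholders for illustration): allocate the Elastic IP, create the NAT gateway in a public subnet, and point the Lambda subnets' default route at it.

    import boto3

    ec2 = boto3.client("ec2")

    PUBLIC_SUBNET_ID = "subnet-0123456789abcdef0"     # placeholder
    PRIVATE_ROUTE_TABLE_ID = "rtb-0123456789abcdef0"  # placeholder

    # Allocate the Elastic IP that the external provider will allow-list.
    eip = ec2.allocate_address(Domain="vpc")

    # Create the NAT gateway in a public subnet and attach the Elastic IP.
    nat = ec2.create_nat_gateway(SubnetId=PUBLIC_SUBNET_ID,
                                 AllocationId=eip["AllocationId"])
    nat_id = nat["NatGateway"]["NatGatewayId"]
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

    # Route the Lambda subnets' internet-bound traffic through the NAT gateway.
    ec2.create_route(RouteTableId=PRIVATE_ROUTE_TABLE_ID,
                     DestinationCidrBlock="0.0.0.0/0",
                     NatGatewayId=nat_id)

    print("IP to give the external provider:", eip["PublicIp"])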
vvahe
Highly Voted 1 year, 5 months ago
A https://docs.aws.amazon.com/lambda/latest/operatorguide/networking-vpc.html "By default, Lambda functions have access to the public internet. This is not the case after they have been configured with access to one of your VPCs. If you continue to need access to resources on the internet, set up a NAT instance or Amazon NAT Gateway. Alternatively, you can also use VPC endpoints to enable private communications between your VPC and supported AWS services."
upvoted 8 times
...
subbupro
Most Recent 2 days, 6 hours ago
A is correct. The NAT gateway not only provides outbound internet access but also provides a single public IP address. So: Selected Answer: A.
upvoted 1 times
...
Jason666888
3 weeks, 1 day ago
THE ANSWER HAS TO BE A!!!! For B: wrong, an egress-only internet gateway is for IPv6, not IPv4. For C & D: an internet gateway is for both inbound and outbound traffic; in our case we only need outbound traffic, so it has to be a NAT gateway.
upvoted 1 times
...
Helpnosense
2 months, 1 week ago
Selected Answer: D
A NAT gateway doesn't allow inbound traffic to flow to services behind it; an ALB or internet gateway can. However, an internet gateway can't be attached to the Lambda service directly. I vote D as the correct answer.
upvoted 2 times
...
kz407
5 months, 1 week ago
Selected Answer: A
Option A is the only solution that matches the given requirements. The problem with any solution involving an IGW is that the IGW DOES NOT perform NAT here: it does not alter the source IP field, meaning we don't really have a mechanism for giving outbound traffic a static public IP address while ensuring security. So the only practical solution is the NAT option.
upvoted 2 times
...
gofavad926
5 months, 1 week ago
Selected Answer: A
A, deploy nat gateway and associate an elastic ip
upvoted 1 times
...
Dgix
5 months, 3 weeks ago
Can an admin please take a look at _all_ the "correct answers" in this exam? They really cannot be trusted and reduce the usefulness of ExamTopics altogether. As things stand, you should always just disregard the correct answer, as it is so often wrong. The correct answer is of course A.
upvoted 3 times
...
Vsos_in29
6 months ago
A is the correct option. Another approach to enable internet access: https://www.linkedin.com/pulse/aws-lambda-accessing-private-vpc-resources-internet-without-vokhmin-pyxbe/
upvoted 1 times
...
8608f25
6 months, 2 weeks ago
Selected Answer: A
The solution that enables the Lambda function in a VPC to access an external service that requires requests to come from a specific public IPv4 address, and to provide a single public IP address for allow listing, is: * Option A is correct because a NAT (Network Address Translation) gateway allows instances or AWS Lambda functions in a private subnet of a VPC to initiate outbound traffic to the internet (or external services) while preventing unsolicited inbound traffic from the internet. By associating an Elastic IP address with the NAT gateway, all outbound traffic from the Lambda function routed through the NAT gateway will appear to come from this single public IP address, which can be provided to the external provider for allow listing.
upvoted 2 times
8608f25
6 months, 2 weeks ago
It is not option C because, Option C describes deploying an internet gateway and associating an Elastic IP address with it. However, Lambda functions cannot be directly associated with Elastic IP addresses, and internet gateways are used to route traffic between a VPC and the internet, not to provide a static public IP address for outbound traffic.
upvoted 3 times
...
...
ninomfr64
7 months, 4 weeks ago
Selected Answer: A
Not B: an egress-only internet gateway is IPv6 only; the question is about IPv4. Not C: you cannot associate an Elastic IP with an IGW; also, a Lambda deployed in a VPC cannot egress to the internet via an IGW - you need a NAT gateway / NAT instance. Not D: same as C. A is the right solution (even if it is not well explained, in my opinion).
upvoted 1 times
...
cgsoft
8 months, 2 weeks ago
Selected Answer: A
As per https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html, "To access private resources, connect your function to private subnets. If your function needs internet access, use network address translation (NAT). Connecting a function to a public subnet doesn't give it internet access or a public IP address."
upvoted 1 times
...
enk
9 months ago
Selected Answer: A
Just to clarify...If the Lambda function is already attached to a VPC, it's implied that it's in a private subnet since Lambda functions can't be directly placed in public subnets. So C and D are out.
upvoted 2 times
...
Pupu86
9 months, 3 weeks ago
Selected Answer: A
Option B is definitely out as egress-only internet gateway is applicable solely for IPv6 traffic.
upvoted 2 times
...
whenthan
10 months, 1 week ago
Selected Answer: A
Internet gateway: you can't assign an Elastic IP to an internet gateway.
upvoted 1 times
...
TWOCATS
12 months ago
Selected Answer: A
Option B is fundamentally wrong as Egress-only internet gateway only supports IPV6, which is basically the IPV6 equivalent of NAT gateway. Please check document [1] Enable outbound IPv6 traffic using an egress-only internet gateway - https://docs.aws.amazon.com/vpc/latest/userguide/egress-only-internet-gateway.html
upvoted 1 times
...
vjp_training
1 year ago
Selected Answer: A
A is the best solution https://repost.aws/knowledge-center/internet-access-lambda-function
upvoted 1 times
...
Russs99
1 year ago
Selected Answer: B
Considering all these points, the best answer is B. A NAT gateway allows private subnets in a VPC to access the internet by providing them a public IP address. However, the Lambda function in this case is already in a public subnet, so a NAT gateway is not needed. A NAT gateway only allows outbound internet access from the private subnets; it does not provide a stable public IP address that can be whitelisted by the external provider. An internet gateway allows bidirectional internet access, which exposes the Lambda function and VPC to unsolicited inbound traffic from the internet; this is more access than what is required. The requirement is to provide the Lambda function with outbound-only internet access and give the external provider a single public IP address to whitelist. An egress-only internet gateway satisfies these requirements exactly: it allows outbound access only, and an Elastic IP can be associated with it to provide a stable, whitelistable IP address.
upvoted 1 times
...
b3llman
1 year ago
Selected Answer: A
IGW allows instances with public IPs to access the internet. NGW allows instances with no public IPs to access the internet. Since the lambda function does not have a public IP and it is in a private subnet, we need a NGW with connectivity type of "public" to access the internet and NGW has a public static IP. IGW by itself does not work for this case.
upvoted 4 times
...
chico2023
1 year ago
Selected Answer: A
This post explains way better than I could: https://matthewleak.medium.com/aws-lambda-functions-with-a-static-ip-89a3ada0b471
upvoted 2 times
...
NikkyDicky
1 year, 1 month ago
Selected Answer: A
it's an A
upvoted 1 times
...
SmileyCloud
1 year, 1 month ago
Selected Answer: A
A - step by step here: https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.html The Elastic IP is attached to the NAT, not the IGW.
upvoted 4 times
kz407
5 months, 1 week ago
But I don't see how option A outlines the solution described at the given URL.
upvoted 1 times
...
...
ailves
1 year, 2 months ago
Selected Answer: A
If we deploy Lambda in a public subnet, Lambda will get an IP address from a random range.
upvoted 1 times
...
easytoo
1 year, 2 months ago
D.
upvoted 1 times
easytoo
1 year, 1 month ago
Updated my answer to A.
upvoted 2 times
...
...
mKrishna
1 year, 3 months ago
Ans: A. Step-by-step instructions at https://africanpearl.hashnode.dev/vpc-network-public-and-private-subnets-grant-subnets-access-to-the-internet-aws
upvoted 1 times
...
aca1
1 year, 3 months ago
Selected Answer: A
No doubt about A. A Lambda function in VPC need a NAT Gateway to access internet, it can not use the Internet Gateway: https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html "Note To access private resources, connect your function to private subnets. If your function needs internet access, use network address translation (NAT). Connecting a function to a public subnet doesn't give it internet access or a public IP address."
upvoted 2 times
...
rbm2023
1 year, 3 months ago
Selected Answer: A
D and C are not ideal choices because they involve moving the Lambda to a public subnet, since they mention using an internet gateway. A makes more sense.
upvoted 1 times
...
RaghavendraPrakash
1 year, 4 months ago
C. An egress-only internet gateway is for IPv6 traffic. A NAT GW still needs an internet GW for internet connectivity; half a solution.
upvoted 3 times
...
Jacky_exam
1 year, 4 months ago
Selected Answer: D
Option D is the correct solution. In order to provide the Lambda function with a single public IP address, an internet gateway must be deployed and associated with an Elastic IP address. The Elastic IP address can then be provided to the external provider for use in the allow list.
upvoted 3 times
Helpnosense
2 months, 1 week ago
Agree. A NAT gateway doesn't allow inbound traffic to flow to services behind it; an ALB or internet gateway can. However, an internet gateway can't be attached to the Lambda service directly. I vote D as the correct answer.
upvoted 1 times
...
Jay_2pt0_1
1 year, 3 months ago
Everyone voted A, but I think you are right. I need to research this one a bit more, though.
upvoted 1 times
...
...
mfsec
1 year, 5 months ago
Selected Answer: A
Deploy a NAT gateway. Associate an Elastic IP address with the NAT gateway.
upvoted 2 times
...
dev112233xx
1 year, 5 months ago
Selected Answer: A
NAT gateway is needed✅
upvoted 2 times
...
macc183
1 year, 5 months ago
Why is D incorrect? I guess the IGW also has a public IP address?
upvoted 1 times
nexus2020
1 year, 4 months ago
Also, D has "Configure the default route", which could potentially break all existing flows.
upvoted 1 times
...
doto
1 year, 5 months ago
IGW cannot have an EIP
upvoted 3 times
...
...
Ajani
1 year, 5 months ago
Selected Answer: A
Easy "A". B is wrong; Egress is a VPC component that allows outbound communication over IPv6 . C and D are wrong
upvoted 2 times
...
c73bf38
1 year, 6 months ago
Selected Answer: A
A. Deploying a NAT gateway is the best solution for this scenario. Since the external provider supports only public IPv4 addresses, the Lambda function can be configured with a private IP address in the VPC. A NAT gateway is used to provide a public IP address to the Lambda function when it accesses the external provider's service. This allows the Lambda function to access the new service while also securing it within the VPC
upvoted 2 times
...
zozza2023
1 year, 6 months ago
Selected Answer: A
option A
upvoted 3 times
...
MasterP007
1 year, 7 months ago
Option B is incorrect, because that's more for the IPv6 use case.
upvoted 1 times
...
Question #48 Topic 1

A solutions architect has developed a web application that uses an Amazon API Gateway Regional endpoint and an AWS Lambda function. The consumers of the web application are all close to the AWS Region where the application will be deployed. The Lambda function only queries an Amazon Aurora MySQL database. The solutions architect has configured the database to have three read replicas.

During testing, the application does not meet performance requirements. Under high load, the application opens a large number of database connections. The solutions architect must improve the application’s performance.

Which actions should the solutions architect take to meet these requirements? (Choose two.)

  • A. Use the cluster endpoint of the Aurora database.
  • B. Use RDS Proxy to set up a connection pool to the reader endpoint of the Aurora database.
  • C. Use the Lambda Provisioned Concurrency feature.
  • D. Move the code for opening the database connection in the Lambda function outside of the event handler.
  • E. Change the API Gateway endpoint to an edge-optimized endpoint.

Correct Answer: BD 🗳️

Community vote distribution
BD (98%)
2%

masetromain
Highly Voted 1 year, 7 months ago
Selected Answer: BD
The correct answer is B and D. B. Using RDS Proxy to set up a connection pool to the reader endpoint of the Aurora database can help improve the performance of the application by reducing the number of connections opened to the database. RDS Proxy manages the connection pool and routes incoming connections to the available read replicas, which can help with connection management and reduce the number of connections that need to be opened and closed. D. Moving the code for opening the database connection in the Lambda function outside of the event handler can help to improve the performance of the application by allowing the database connection to be reused across multiple requests. This avoids the need to open and close a new connection for each request, which can be time-consuming and resource-intensive.
upvoted 43 times
masetromain
1 year, 7 months ago
A. Using the cluster endpoint of the Aurora database instead of the reader endpoint would not help improve performance in this case, because the solution architect is already using read replicas to offload read traffic from the primary instance. C. Using the Lambda Provisioned Concurrency feature would not help improve performance in this case, as the problem is related to the number of connections to the database, not the number of instances running the Lambda function. E. Changing the API Gateway endpoint to an edge-optimized endpoint would not help improve performance in this case, as the problem is related to the number of connections to the database, not the location of the API Gateway endpoint.
upvoted 11 times
...
...
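Option D is easy to show in code. In the sketch below the connection is created at module load, outside the handler, so warm Lambda environments reuse it; pointed at an RDS Proxy reader endpoint (option B), this keeps the total connection count low. The environment variable names and table are assumptions.

    import os
    import pymysql  # shipped in the deployment package or a Lambda layer

    # Created once per execution environment, not once per invocation.
    connection = pymysql.connect(
        host=os.environ["PROXY_READER_ENDPOINT"],  # RDS Proxy reader endpoint
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
        database=os.environ["DB_NAME"],
        connect_timeout=5,
    )

    def handler(event, context):
        # Reuse the module-level connection on every warm invocation.
        with connection.cursor() as cur:
            cur.execute("SELECT id, name FROM products LIMIT 10")
            rows = cur.fetchall()
        return {"statusCode": 200, "body": str(rows)}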
Malcnorth59
Most Recent 3 months ago
Selected Answer: BD
The issue is the number of database connections; these are the only two changes that would impact the number of concurrent DB connections.
upvoted 1 times
...
gofavad926
5 months, 1 week ago
Selected Answer: BD
B and D
upvoted 1 times
...
totten
10 months, 4 weeks ago
Selected Answer: BD
B. Use RDS Proxy to set up a connection pool to the reader endpoint of the Aurora database. RDS Proxy helps manage and efficiently pool database connections, reducing the number of database connections required by the application. It helps improve performance and reduces the load on the database. D. Move the code for opening the database connection in the Lambda function outside of the event handler. By reusing database connections, you can reduce the overhead of opening and closing connections for each Lambda invocation. You can use the Lambda execution context to keep the database connection open and reuse it across multiple requests within the same execution context.
upvoted 3 times
...
NikkyDicky
1 year, 1 month ago
Selected Answer: BD
BD for sure
upvoted 1 times
...
mfsec
1 year, 5 months ago
Selected Answer: BD
RDS proxy + Lambda function
upvoted 4 times
...
dev112233xx
1 year, 5 months ago
Selected Answer: BD
RDS Proxy; and connecting outside the handler method is up to 5 times faster than connecting inside.
upvoted 3 times
...
kiran15789
1 year, 5 months ago
Selected Answer: BD
The Lambda function only queries an Amazon Aurora MySQL database, so I would reject option C.
upvoted 2 times
...
God_Is_Love
1 year, 6 months ago
This may be too logical an answer :-) Setting up RDS Proxy will help with connection pooling, so B is one answer. Now, C vs. D: this question focuses on serverless solutions and Lambda best practices, and it hints that the Lambda only contains simple code, so Lambda concurrency improvements may not address the performance issues detected while testing. And note the app is still in the testing phase, so the code might have a flaw that can be reviewed and changed per the Lambda best practices: https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html. I choose B and D.
upvoted 3 times
...
moota
1 year, 6 months ago
Selected Answer: BD
According to ChatGPT, By reusing the same database connection across multiple invocations of the function, you can reduce the number of database connections that are opened and closed, which can help conserve resources and reduce the risk of running into database connection limits.
upvoted 2 times
...
Amac1979
1 year, 6 months ago
BD https://awstut.com/en/2022/04/30/connect-to-rds-outside-of-lambda-handler-method-to-improve-performance-en/
upvoted 4 times
...
masssa
1 year, 7 months ago
B/C: Lambda provisioned concurrency and RDS Proxy are mentioned on the same page. https://quintagroup.com/blog/aws-lambda-provisioned-concurrency
upvoted 1 times
...
Untamables
1 year, 7 months ago
Selected Answer: BC
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-proxy.howitworks.html https://docs.aws.amazon.com/lambda/latest/dg/provisioned-concurrency.html
upvoted 1 times
...
jhonivy
1 year, 7 months ago
B/C. Provisioned Concurrency is needed: https://www.reddit.com/r/aws/comments/gcwtqt/lambda_provisioned_concurrency_with_aurora/ With a connection pool, no need to worry about D.
upvoted 1 times
...
Question #49 Topic 1

A company is planning to host a web application on AWS and wants to load balance the traffic across a group of Amazon EC2 instances. One of the security requirements is to enable end-to-end encryption in transit between the client and the web server.

Which solution will meet this requirement?

  • A. Place the EC2 instances behind an Application Load Balancer (ALB). Provision an SSL certificate using AWS Certificate Manager (ACM), and associate the SSL certificate with the ALB. Export the SSL certificate and install it on each EC2 instance. Configure the ALB to listen on port 443 and to forward traffic to port 443 on the instances.
  • B. Associate the EC2 instances with a target group. Provision an SSL certificate using AWS Certificate Manager (ACM). Create an Amazon CloudFront distribution and configure it to use the SSL certificate. Set CloudFront to use the target group as the origin server.
  • C. Place the EC2 instances behind an Application Load Balancer (ALB). Provision an SSL certificate using AWS Certificate Manager (ACM), and associate the SSL certificate with the ALB. Provision a third-party SSL certificate and install it on each EC2 instance. Configure the ALB to listen on port 443 and to forward traffic to port 443 on the instances.
  • D. Place the EC2 instances behind a Network Load Balancer (NLB). Provision a third-party SSL certificate and install it on the NLB and on each EC2 instance. Configure the NLB to listen on port 443 and to forward traffic to port 443 on the instances.

Correct Answer: C 🗳️

Community vote distribution
C (52%)
D (38%)
9%

pitakk
Highly Voted 1 year, 7 months ago
Selected Answer: C
Amazon-issued public certificates can’t be installed on an EC2 instance. To enable end-to-end encryption, you must use a third-party SSL certificate. https://aws.amazon.com/premiumsupport/knowledge-center/acm-ssl-certificate-ec2-elb/ so it's C or D. I choose C as it's ALB
upvoted 45 times
_Jassybanga_
6 months, 2 weeks ago
In C, the encryption will terminate at the ALB, so it's not end-to-end encryption; for end-to-end encryption you need an NLB.
upvoted 2 times
...
hobokabobo
1 year, 6 months ago
Correct, but then you would use that purchased certificate for the ALB as well. The other reason to order certificates is that some clients cannot verify ACM certificates, which is not acceptable for a production public service. Between the ALB and EC2, a self-signed certificate is sufficient, as the ALB does no verification of the EC2 instance's certificate at all.
upvoted 2 times
bjexamprep
4 months, 3 weeks ago
That means you are decrypting the data on the ALB and encrypting it again to send it to EC2. Does that sound E2E?
upvoted 4 times
...
...
...
Untamables
Highly Voted 1 year, 7 months ago
Selected Answer: D
Vote D. If you need to pass encrypted traffic to targets without the load balancer decrypting it, you can create a Network Load Balancer or Classic Load Balancer with a TCP listener on port 443. https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html
upvoted 35 times
hobokabobo
1 year, 6 months ago
Correct, but they want to upload the certificate to the NLB for unknown reasons.
upvoted 5 times
...
Arnaud92
1 year, 5 months ago
You can use an NLB with an ACM cert on it. An NLB can do TLS termination (https://aws.amazon.com/blogs/aws/new-tls-termination-for-network-load-balancers/) and re-encrypt to the target.
upvoted 2 times
...
lkyixoayffasdrlaqd
1 year, 6 months ago
How can this be true? Option D says to install the cert on the NLB, yet you say the NLB passes traffic through without decrypting it. If the NLB never decrypts, why are you installing the cert?
upvoted 11 times
...
...
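The passthrough setup the docs describe is a one-call change. A rough boto3 sketch follows (the ARNs are placeholders): a TCP listener on 443 forwards the encrypted bytes unchanged, so TLS terminates only on the instances.

    import boto3

    elbv2 = boto3.client("elbv2")

    # TCP (not TLS) on 443: the NLB never decrypts, preserving end-to-end
    # encryption between the client and the web servers.
    elbv2.create_listener(
        LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:"
                        "111122223333:loadbalancer/net/web-nlb/0123456789abcdef",
        Protocol="TCP",
        Port=443,
        DefaultActions=[{
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:"
                              "111122223333:targetgroup/web-tls/0123456789abcdef",
        }],
    )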
toma
Most Recent 1 month ago
It is D; C is more complex.
upvoted 1 times
...
higashikumi
3 months ago
Selected Answer: C
To achieve end-to-end encryption for a web application using AWS, place the EC2 instances behind an Application Load Balancer (ALB). Provision an SSL certificate using AWS Certificate Manager (ACM) and associate it with the ALB to handle HTTPS traffic from clients to the ALB. Additionally, install a third-party SSL certificate on each EC2 instance to ensure that traffic between the ALB and the instances is also encrypted. Configure the ALB to listen on port 443 and forward traffic to port 443 on the instances. This setup ensures that all data in transit is encrypted from the client through the ALB to the backend EC2 instances, meeting security requirements for end-to-end encryption while leveraging ACM for simplified certificate management.
upvoted 1 times
...
Malcnorth59
3 months ago
Selected Answer: D
The key here is end-to-end, so that rules out the ALB. Instead, use an NLB with TLS passthrough, which will pass the traffic on encrypted. https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-tls-listener.html#:~:text=The%20load%20balancer%20passes%20the,combination%20of%20protocols%20and%20ciphers.
upvoted 1 times
...
titi_r
3 months, 1 week ago
Selected Answer: D
“To enable END-TO-END encryption, you must procure an SSL certificate from a third-party vendor. You can then install the certificate on the EC2 instance and also associate the SAME certificate with the (network) Load Balancer by importing it into Amazon Certificate Manager.” https://www.youtube.com/watch?v=6Nz0RFfBqVE&t=44s TLS listeners for your Network Load Balancer: "… if you need to pass encrypted traffic to the targets without the (network) load balancer decrypting it, create a TCP listener on port 443 instead of creating a TLS listener." https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-tls-listener.html P.S. The answer is misleading because it says to install the certificate on the NLB; read it as “import it to ACM and associate it with the NLB.”
upvoted 2 times
...
vip2
3 months, 1 week ago
Selected Answer: C
C is correct because: ALB + self-signed certificate; NLB + public certificate.
upvoted 1 times
...
EmmanuelPR
5 months, 1 week ago
Selected Answer: A
Public Certificates: You can request Amazon-issued public certificates from ACM. ACM manages the renewal and deployment of public certificates that are used with ACM-integrated services, including Amazon CloudFront, Elastic Load Balancing, and Amazon API Gateway. https://aws.amazon.com/es/certificate-manager/faqs/
upvoted 2 times
...
gofavad926
5 months, 1 week ago
Selected Answer: C
C: use ACM on the ALB and a third-party SSL certificate on the EC2 instances.
upvoted 1 times
...
Dgix
5 months, 2 weeks ago
Selected Answer: D
The only solution that encrypts all the way is D.
upvoted 1 times
...
bjexamprep
5 months, 3 weeks ago
Selected Answer: D
The different opinions are mainly on C or D. Both C and D aim at end-to-end encryption "in transit", but with C the data is actually decrypted on the ALB and then encrypted again; technically speaking, the ALB should be considered part of the "transit". That is a flaw of C, and it is complicated to introduce another certificate. The flaws of answer D are:
- it mentions installing the SSL certificate on the NLB, which is not necessary;
- it doesn't mention which listener is used (a TLS listener does SSL termination, while a TCP listener does not).
upvoted 1 times
...
marszalekm
6 months ago
https://aws.amazon.com/blogs/aws/mutual-authentication-for-application-load-balancer-to-reliably-verify-certificate-based-client-identities/
upvoted 1 times
...
ninomfr64
7 months, 4 weeks ago
Selected Answer: D
Not A. You cannot export an ACM certificate: https://repost.aws/knowledge-center/configure-acm-certificates-ec2 Not B. You cannot set CloudFront to use the target group as the origin server; you need to point it at the ELB the target group is assigned to. Not C. This terminates SSL at the load balancer and then re-encrypts, while the question asks for end-to-end encryption in transit between the client and the web server. https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html An NLB configured with a TCP listener on port 443 is the right option. This answer is misleading because it mentions installing the SSL certificate on the NLB; this is not needed if you do not use a TLS listener.
upvoted 2 times
...
subbupro
8 months, 3 weeks ago
D would be fine: transport-level security, with no need to decrypt and re-encrypt anywhere in between.
upvoted 1 times
...
sonyaws
9 months ago
Selected Answer: D
Application Load Balancers do not support mutual TLS authentication (mTLS). For mTLS support, create a TCP listener using a Network Load Balancer or a Classic Load Balancer and implement mTLS on the target. Ref: 4th paragraph of https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html
upvoted 1 times
...
aokaddaoc
9 months, 1 week ago
Selected Answer: D
Note that if you need to pass encrypted traffic to the targets without the load balancer decrypting it, create a TCP listener on port 443 instead of creating a TLS listener. The load balancer passes the request to the target as is, without decrypting it. https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-tls-listener.html Must be D; C will definitely decrypt the request once at the ALB.
upvoted 3 times
...
heatblur
9 months, 1 week ago
Selected Answer: C
C is the best choice. Similar to Option A, but with the use of a third-party SSL certificate installed on each EC2 instance. This approach would indeed ensure end-to-end encryption, with the ALB handling the SSL termination from the client and the third-party SSL certificate securing the connection from the ALB to the EC2 instances. This option is technically feasible and meets the requirement of end-to-end encryption.
upvoted 2 times
...
PAUGURU
9 months, 1 week ago
This question is clearly wrong and no option is correct. In my world, end-to-end means there is no decryption from source to target (server). If you decrypt it on an NLB or ALB and then re-encrypt it, Amazon could read the traffic in clear if they want to, so the encryption is NEVER end-to-end with these choices.
upvoted 1 times
PAUGURU
8 months, 3 weeks ago
Changed to D, the only one that lets encrypted traffic pass through: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html
upvoted 1 times
...
...
severlight
9 months, 2 weeks ago
Selected Answer: C
C and D will work, but for web applications, C is preferred.
upvoted 1 times
...
Russs99
9 months, 3 weeks ago
Selected Answer: A
I originally picked C, but you cannot use a third-party SSL certificate with an Application Load Balancer (ALB). An ALB only supports SSL certificates that are provisioned by AWS Certificate Manager (ACM) or imported into ACM. Remember this for the exam.
upvoted 1 times
rainrafa
7 months ago
While you're doing that, also remember you can't export ACM certs. So definitely don't go for A.
upvoted 3 times
...
...
SuperDuperPooperScooper
9 months, 3 weeks ago
Selected Answer: D
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html
upvoted 2 times
...
Pupu86
9 months, 3 weeks ago
Selected Answer: C
Both the NLB and the ALB can handle SSL/TLS offloading/termination, but I would choose C because the crux here points towards web traffic (HTTP), and the ALB handles web traffic while the NLB handles TCP traffic.
upvoted 1 times
...
dpatra
10 months, 2 weeks ago
The correct answer is D. Even though you attach the certificate to the NLB as well, that does not mean it has to use it; it gives you the flexibility of either SSL passthrough or SSL termination at the NLB. It is the only option that enables end-to-end encryption, since the ALB does not support SSL passthrough: SSL termination happens at the load balancer level and therefore breaks end-to-end encryption.
upvoted 1 times
...
longns
11 months ago
Selected Answer: D
There is no way for an ALB to pass encrypted traffic to targets without the load balancer decrypting it; you must create a Network Load Balancer or a Classic Load Balancer. Again, check the answer yourself, don't just read the comments :) myself included. https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html
upvoted 2 times
...
Greyeye
1 year ago
D is the answer. For an NLB, the load balancer passes the request to the target as is, without decrypting it; see https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-tls-listener.html For A, you cannot export an ACM cert: https://repost.aws/knowledge-center/acm-export-certificate With B and C, the endpoint decrypts the traffic and proxies it to the origin/target. To meet end-to-end encryption, D is the only one.
upvoted 1 times
...
xplusfb
1 year ago
Selected Answer: C
Since the question asks for end-to-end encryption: we did a question like this before with the same scenario, with a third-party SSL certificate on the EC2 servers and ACM used with CloudFront. I'm going with C, fellas. Thank you.
upvoted 2 times
...
chico2023
1 year ago
Selected Answer: C
Answer: C Same reasons as most have put, plus this: https://repost.aws/questions/QUIo7PWvZ3T6aFYCByhZ5f0A/load-certificate-on-alb-and-ec2
upvoted 1 times
...
NikkyDicky
1 year, 1 month ago
Selected Answer: C
It's a C.
upvoted 2 times
...
SmileyCloud
1 year, 1 month ago
Selected Answer: D
C is valid, see here: https://faun.pub/end-to-end-ssl-encryption-with-aws-application-load-balancer-b43db918bd9e But D is better: less overhead and no fake certs.
upvoted 1 times
...
Jonalb
1 year, 1 month ago
Selected Answer: C
It's a C.
upvoted 1 times
...
[Removed]
1 year, 2 months ago
Selected Answer: D
D: An NLB is needed to provide the complete end-to-end encryption the question calls for; the other answers all decrypt the traffic in the middle somewhere. The only confounding factor in the wording is that it talks about "installing the certificate on the NLB", which isn't required for end-to-end; you'd just use pass-through TCP on port 443. You *can* install a certificate on an NLB if you want to use a TLS listener (https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-tls-listener.html), but that would a) decrypt in the middle, and b) shouldn't be required here.
upvoted 1 times
...
ailves
1 year, 2 months ago
Selected Answer: C
I voted for C: we need end-to-end encryption, so we have to install third-party certificates on the EC2 instances (ACM certs can't go there), and we have to use an ALB since this is HTTP traffic.
upvoted 1 times
...
easytoo
1 year, 2 months ago
Voting A.
upvoted 1 times
easytoo
1 year, 2 months ago
Changed it to C.
upvoted 1 times
...
...
Asds
1 year, 2 months ago
I will go with C, though I was hesitating over D. What convinced me: there is no need to install any certs on NLBs when you're doing passthrough, so no decryption happens at that point. Since D says to install one, I dropped D; C is my choice.
upvoted 2 times
...
papawed345
1 year, 3 months ago
Selected Answer: D
The only possible answer is an NLB. The ALB will always decrypt in the middle.
upvoted 1 times
...
emiioan
1 year, 3 months ago
Selected Answer: C
C is correct. Although D works, the fact that it states "install it on the NLB" is wrong, as you can only associate/add a certificate to the listener; there is no install option. An ALB with a public ACM cert forwarding to a target group with a self-signed cert listening on port 443 is correct (see the implementation steps here).
upvoted 6 times
Jesuisleon
1 year, 3 months ago
I agree with you. An NLB can work via its TCP endpoint, forwarding the encrypted connection through, but in that case there is no need to install a cert on the NLB. So C is better.
upvoted 1 times
...
...
meggie
1 year, 3 months ago
vote for D. NLB works at layer 7 and won't decrypt traffic. However, ALB works at layer 4.
upvoted 1 times
Sarutobi
1 year, 3 months ago
I think you have that backward; a Network Load Balancer actually works at L3 (the network layer of the OSI model) and L4 (transport, UDP/TCP). When introduced, the NLB could not do TLS work, although it can now. The ALB works at L7, the application layer of the OSI model. The ALB is only an HTTP proxy, so it only supports HTTP traffic; you won't be able to use it for UDP or any other TCP traffic.
upvoted 4 times
...
...
F_Eldin
1 year, 3 months ago
Selected Answer: A
You can also export private certificates for use on EC2 instances, on ECS containers, or anywhere else. https://aws.amazon.com/certificate-manager/faqs/
upvoted 3 times
...
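To clarify the export point above: ACM's export API only works for certificates issued by AWS Private CA; Amazon-issued public certificates cannot be exported. A minimal boto3 sketch with a placeholder ARN:

```python
import boto3

acm = boto3.client("acm")

# Works only for certificates issued by AWS Private CA. Calling this on an
# Amazon-issued public certificate fails, which is why a public ACM cert
# cannot be installed directly on an EC2 instance.
resp = acm.export_certificate(
    CertificateArn="arn:aws:acm:us-east-1:111122223333:certificate/placeholder",
    Passphrase=b"example-passphrase",  # used to encrypt the returned private key
)
cert = resp["Certificate"]
chain = resp["CertificateChain"]
encrypted_key = resp["PrivateKey"]
```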
Maria2023
1 year, 4 months ago
Selected Answer: C
Here is a similar scenario but for beanstalk https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https-endtoend.html
upvoted 1 times
...
OCHT
1 year, 4 months ago
Selected Answer: C
They can place the EC2 instances behind an Application Load Balancer (ALB), provision an SSL certificate using AWS Certificate Manager (ACM), and associate the SSL certificate with the ALB. They can also provision a third-party SSL certificate and install it on each EC2 instance. Finally, they can configure the ALB to listen on port 443 and to forward traffic to port 443 on the instances. This will ensure that traffic is encrypted both between the client and the ALB, and between the ALB and the EC2 instances.
upvoted 4 times
...
Jacky_exam
1 year, 4 months ago
Selected Answer: A
A. Place the EC2 instances behind an Application Load Balancer (ALB). Provision an SSL certificate using AWS Certificate Manager (ACM), and associate the SSL certificate with the ALB. Export the SSL certificate and install it on each EC2 instance. Configure the ALB to listen on port 443 and to forward traffic to port 443 on the instances. This solution is the recommended approach for enabling SSL/TLS encryption between clients and web servers on AWS. It uses the Application Load Balancer (ALB) to terminate SSL/TLS traffic and then forwards the traffic to the EC2 instances over an encrypted connection. Provisioning an SSL certificate using AWS Certificate Manager (ACM) provides a free, trusted SSL/TLS certificate that can be easily managed and automatically renewed. The SSL certificate is associated with the ALB and can be exported and installed on each EC2 instance for end-to-end encryption.
upvoted 5 times
...
takecoffe
1 year, 4 months ago
Selected Answer: D
Of course: a Network Load Balancer with a third-party certificate.
upvoted 2 times
...
Amac1979
1 year, 5 months ago
Selected Answer: C
https://repost.aws/knowledge-center/acm-ssl-certificate-ec2-elb
upvoted 1 times
...
mfsec
1 year, 5 months ago
Selected Answer: C
C is my vote
upvoted 1 times
...
dev112233xx
1 year, 5 months ago
Selected Answer: C
C is the best solution, and it does actually work (you can google it). Answer D is wrong: why would you import the certificate at the NLB stage if it's end to end? The host (EC2) should handle the certificate.
upvoted 1 times
...
aqiao
1 year, 5 months ago
Selected Answer: C
Amazon-issued public certificates can’t be installed on an EC2 instance. To enable end-to-end encryption, you must use a third-party SSL certificate https://aws.amazon.com/premiumsupport/knowledge-center/acm-ssl-certificate-ec2-elb/?nc1=h_ls
upvoted 1 times
...
zejou1
1 year, 5 months ago
Selected Answer: D
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-listeners.html says to configure the NLB to 'listen' and forward for end-to-end. Under the ALB docs it points you to the NLB: "If you must ensure that the targets decrypt HTTPS traffic instead of the load balancer, you can create a Network Load Balancer with a TCP listener on port 443." https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html
upvoted 1 times
...
cherep87
1 year, 5 months ago
Vote for D. Option C will decrypt the traffic on the ALB, which goes against end-to-end encryption from client to server.
upvoted 3 times
...
vherman
1 year, 5 months ago
Selected Answer: C
C is correct. Could App Mesh be used here?
upvoted 1 times
...
kiran15789
1 year, 5 months ago
Selected Answer: C
AWS Certificate Manager (ACM) SSL certificates cannot be directly applied to EC2 instances, so I will go with C on this one.
upvoted 1 times
...
rtgfdv3
1 year, 5 months ago
Selected Answer: D
No idea why AWS says that these solutions guarantee "complete end-to-end encryption in transit": https://aws.amazon.com/blogs/aws/new-tls-termination-for-network-load-balancers/ "After choosing the certificate and the policy, I click Next:Configure Routing. I can choose the communication protocol (TCP or TLS) that will be used between my NLB and my targets. If I choose TLS, communication is encrypted; this allows you to make use of complete end-to-end encryption in transit:"
upvoted 1 times
...
Ajani
1 year, 5 months ago
Selected Answer: D
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/use_cases/nlb_tls_termination/#:~:text=AWS%20introduced%20TLS%20termination%20for,access%20to%20the%20private%20key.
upvoted 1 times
Ajani
1 year, 5 months ago
"After choosing the certificate and the policy, I click Next:Configure Routing. I can choose the communication protocol (TCP or TLS) that will be used between my NLB and my targets. If I choose TLS, communication is encrypted; this allows you to make use of complete end-to-end encryption in transit:" I did not know TLS termination was possible with NLB's https://aws.amazon.com/blogs/aws/new-tls-termination-for-network-load-balancers/
upvoted 2 times
...
...
hobokabobo
1 year, 6 months ago
Retrying, neglecting end-to-end: it is IMO not possible to export ACM keys, so I think one cannot install an ACM certificate on EC2. This excludes A. Now we have three technically possible solutions. B) has no encryption at all between CloudFront and EC2. C) orders a certificate from a third party only to not deliver it to the client. D) NLB certificate support is limited and can't do strong encryption. From that, D and C are slightly better than B, as they provide encryption between the server and the ALB/NLB, even though it's not end to end.
upvoted 1 times
lkyixoayffasdrlaqd
1 year, 6 months ago
You cannot export ACM Keys? Who says that? You can if you are in the same account and region. "You can't export an ACM certificate from one AWS Region to another or from one AWS account to another. This is because the default AWS Key Management Service (AWS KMS) key used to encrypt the private key of the certificate is unique for each AWS Region and AWS account."
upvoted 1 times
sambb
1 year, 5 months ago
"You cannot use ACM to install a public certificate directly on your AWS based website or application. You must use one of the services integrated with ACM" In our case, we want to install the certificate on the EC2, which is not possible when it is stored in ACM. https://docs.aws.amazon.com/acm/latest/userguide/gs-acm-install.html It is only possible in ACM PCA.
upvoted 2 times
hobokabobo
1 year, 4 months ago
Yep, that is the behavior I found. Up to now I have found no way to get hold of the key (to download it).
upvoted 1 times
...
...
hobokabobo
1 year, 4 months ago
Uhm: you have a key and a certificate. That you can download the cert does not mean you can download the key for the certificate; at least I never found any method to do so. There is a button to download the cert but no button to download the key, or I was not able to find it.
upvoted 1 times
...
...
...
hobokabobo
1 year, 6 months ago
Selected Answer: D
The key in this question is *end-to-end* encryption between client and server. That means we are not allowed to offload encryption to a load balancer; instead, we need the load balancer to pass the encrypted traffic as is to the server. Since we may not interfere with the encrypted traffic, all the benefits of an Application Load Balancer are void, so an NLB is the best choice. As a side note, C is ridiculous: order a certificate only to not deliver it to clients? If one offloads, one would use the same certificate for the server and the ALB, or use the ordered certificate on the ALB and create a cheaper one for internal encryption between server and ALB. You want the ordered certificate delivered to the clients.
upvoted 3 times
hobokabobo
1 year, 6 months ago
Actually, I was not able to read correctly. D also violates end to end encryption. (C is still ridiculous.)
upvoted 1 times
OnePunchExam
1 year, 4 months ago
You will be in for a shock when you start learning about mTLS and microservices.
upvoted 1 times
hobokabobo
1 year, 4 months ago
Actually: no, the straight opposite. Also, with D they decrypt and re-encrypt on the NLB for whatever reason. That they do *not* do mTLS is the problem; the answer is not mTLS.
upvoted 1 times
hobokabobo
1 year, 4 months ago
Also, AWS load balancers do no certificate verification at all. Because of that, putting a cert on the NLB voids verification completely (man in the middle). mTLS is the opposite: that would be fine.
upvoted 1 times
...
...
...
...
...
jaysparky
1 year, 6 months ago
Both C and D are correct
upvoted 2 times
...
spd
1 year, 6 months ago
Selected Answer: C
C 100% - ACM for the ALB and a third-party SSL certificate for EC2.
upvoted 1 times
...
rtgfdv3
1 year, 6 months ago
Selected Answer: D
The problem that I can see with C is that it does not guarantee end-to-end: you need to offload SSL at the ALB and then re-encrypt, meaning that at some point (inside the ALB) you have the data in plain text.
upvoted 1 times
...
c73bf38
1 year, 6 months ago
Selected Answer: A
Answer: A Explanation: When using AWS Certificate Manager (ACM), you can provision SSL/TLS certificates that you can use with Amazon CloudFront to distribute traffic to EC2 instances or other resources. For encrypting data in transit between clients and web servers, you can place the EC2 instances behind an Application Load Balancer (ALB), provision an SSL/TLS certificate using ACM, and associate the SSL/TLS certificate with the ALB. Option B is incorrect because it doesn't provide end-to-end encryption between the client and the web server. Option C is incorrect because you don't need a third-party SSL/TLS certificate when you are using AWS Certificate Manager (ACM). Option D is incorrect because you can't install a third-party SSL/TLS certificate on a Network Load Balancer (NLB).
upvoted 2 times
c73bf38
1 year, 6 months ago
Moderator, don't approve, this is incorrect.
upvoted 1 times
...
c73bf38
1 year, 6 months ago
After reading this https://aws.amazon.com/premiumsupport/knowledge-center/acm-ssl-certificate-ec2-elb, I've changed to C.
upvoted 1 times
...
...
moota
1 year, 6 months ago
Selected Answer: C
According to ChatGPT, AWS Certificate Manager (ACM) SSL certificates cannot be directly applied to EC2 instances. ACM SSL certificates can only be used with AWS services like Elastic Load Balancing (ELB), CloudFront, API Gateway, and some other services. To use an ACM SSL certificate with an EC2 instance, you would need to place the instances behind an Elastic Load Balancer (ELB) or an Application Load Balancer (ALB) that terminates SSL traffic using the ACM certificate, and then forward the traffic to the instances. Alternatively, you can obtain SSL certificates from other sources (like a third-party certificate authority or Let's Encrypt) and install them directly on the EC2 instances.
upvoted 1 times
...
tinyflame
1 year, 6 months ago
Selected Answer: CD
Both C and D are correct
upvoted 1 times
...
jojom19980
1 year, 6 months ago
Selected Answer: C
You must also configure the instances in your environment to listen on the secure port and terminate HTTPS connections. The configuration varies per platform. See Configuring your application to terminate HTTPS connections at the instance for instructions. You can use a self-signed certificate for the EC2 instances without issue. link: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https-endtoend.html
upvoted 2 times
...
Musk
1 year, 6 months ago
Selected Answer: C
I think I'll go for C, although depending on the definition of end-to-end (does it allow decryption in between?) it might be D.
upvoted 1 times
...
ccort
1 year, 7 months ago
Selected Answer: C
I think C, only because ACM only allows you to export private certificates, not public ones, and I assume a public one is what the ALB is using.
upvoted 1 times
...
bititan
1 year, 7 months ago
Selected Answer: C
Re: A - this is about end-to-end encryption between the client and the web server. If you do not install a cert on the web server, the transmission between the ALB and the web server remains unencrypted. Cost is not a factor mentioned here; only security is.
upvoted 2 times
...
masetromain
1 year, 7 months ago
Selected Answer: A
The correct answer is option A. By placing the EC2 instances behind an Application Load Balancer (ALB) and provisioning an SSL certificate using AWS Certificate Manager (ACM), associating the SSL certificate with the ALB, and configuring the ALB to listen on port 443 and forward traffic to port 443 on the instances, it ensures that traffic is encrypted in transit between the client and the web server. This meets the requirement for end-to-end encryption. Option B is incorrect because it does not allow for end-to-end encryption in transit between the client and the web server. Option C is incorrect because it involves using two SSL certificates, one from AWS and one from a third-party, which would create complexity and increase costs. Option D is incorrect because it uses a Network Load Balancer (NLB) which does not support SSL termination and would not ensure end-to-end encryption in transit between the client and the web server.
upvoted 4 times
Asds
1 year, 2 months ago
Can't export an SSL cert from ACM.
upvoted 1 times
Asds
1 year, 2 months ago
https://docs.aws.amazon.com/acm/latest/userguide/acm-services.html
upvoted 1 times
...
...
...
Question #50 Topic 1

A company wants to migrate its data analytics environment from on premises to AWS. The environment consists of two simple Node.js applications. One of the applications collects sensor data and loads it into a MySQL database. The other application aggregates the data into reports. When the aggregation jobs run, some of the load jobs fail to run correctly.

The company must resolve the data loading issue. The company also needs the migration to occur without interruptions or changes for the company’s customers.

What should a solutions architect do to meet these requirements?

  • A. Set up an Amazon Aurora MySQL database as a replication target for the on-premises database. Create an Aurora Replica for the Aurora MySQL database, and move the aggregation jobs to run against the Aurora Replica. Set up collection endpoints as AWS Lambda functions behind a Network Load Balancer (NLB), and use Amazon RDS Proxy to write to the Aurora MySQL database. When the databases are synced, disable the replication job and restart the Aurora Replica as the primary instance. Point the collector DNS record to the NLB.
  • B. Set up an Amazon Aurora MySQL database. Use AWS Database Migration Service (AWS DMS) to perform continuous data replication from the on-premises database to Aurora. Move the aggregation jobs to run against the Aurora MySQL database. Set up collection endpoints behind an Application Load Balancer (ALB) as Amazon EC2 instances in an Auto Scaling group. When the databases are synced, point the collector DNS record to the ALB. Disable the AWS DMS sync task after the cutover from on premises to AWS.
  • C. Set up an Amazon Aurora MySQL database. Use AWS Database Migration Service (AWS DMS) to perform continuous data replication from the on-premises database to Aurora. Create an Aurora Replica for the Aurora MySQL database, and move the aggregation jobs to run against the Aurora Replica. Set up collection endpoints as AWS Lambda functions behind an Application Load Balancer (ALB), and use Amazon RDS Proxy to write to the Aurora MySQL database. When the databases are synced, point the collector DNS record to the ALB. Disable the AWS DMS sync task after the cutover from on premises to AWS.
  • D. Set up an Amazon Aurora MySQL database. Create an Aurora Replica for the Aurora MySQL database, and move the aggregation jobs to run against the Aurora Replica. Set up collection endpoints as an Amazon Kinesis data stream. Use Amazon Kinesis Data Firehose to replicate the data to the Aurora MySQL database. When the databases are synced, disable the replication job and restart the Aurora Replica as the primary instance. Point the collector DNS record to the Kinesis data stream.
Reveal Solution Hide Solution

Correct Answer: C 🗳️

Community vote distribution
C (95%)
5%

OCHT
Highly Voted 1 year, 4 months ago
Selected Answer: C
Options A, B, and D have some similarities with option C but also some key differences. Option A uses a Network Load Balancer (NLB) instead of an Application Load Balancer (ALB) and does not use AWS Database Migration Service (AWS DMS) for continuous data replication; instead, it sets up the Aurora MySQL database as a replication target for the on-premises database. Option B does use AWS DMS for continuous data replication and sets up collection endpoints behind an ALB as Amazon EC2 instances in an Auto Scaling group; however, it does not create an Aurora Replica for the Aurora MySQL database or use Amazon RDS Proxy to write to the Aurora MySQL database. Option D does not use AWS DMS for continuous data replication or set up collection endpoints behind an ALB; instead, it sets up collection endpoints as an Amazon Kinesis data stream and uses Amazon Kinesis Data Firehose to replicate the data to the Aurora MySQL database.
upvoted 16 times
...
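To illustrate the continuous-replication piece that options B and C rely on, here is a minimal boto3 sketch of a full-load-plus-CDC DMS task. The task identifier and the endpoint/instance ARNs are placeholders, and the table mapping simply includes every table:

```python
import boto3

dms = boto3.client("dms")

# Full load of the existing rows, then ongoing change data capture (CDC),
# so cutover can happen once the databases are in sync.
task = dms.create_replication_task(
    ReplicationTaskIdentifier="onprem-mysql-to-aurora",                    # placeholder
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SRC",   # placeholder
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TGT",   # placeholder
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:RI",    # placeholder
    MigrationType="full-load-and-cdc",
    TableMappings=(
        '{"rules": [{"rule-type": "selection", "rule-id": "1", "rule-name": "1",'
        ' "object-locator": {"schema-name": "%", "table-name": "%"},'
        ' "rule-action": "include"}]}'
    ),
)

dms.start_replication_task(
    ReplicationTaskArn=task["ReplicationTask"]["ReplicationTaskArn"],
    StartReplicationTaskType="start-replication",
)
```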
ninomfr64
Most Recent 7 months, 4 weeks ago
Selected Answer: C
Not A. It is not clear how the on-premises database is replicated to Aurora MySQL; also, you cannot place Lambda behind an NLB, as an NLB only supports private IPs, instances, and an ALB as targets: https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html Not B. This keeps the aggregation job and the load running against the same database instance, which does not resolve the loading issues. Not D. Using Kinesis Data Firehose to replicate the database is not recommended; the solution should involve DMS. Also, moving to a Kinesis data stream for data loading requires changes on the customer side, which the requirements rule out. C is the right solution: use DMS to migrate the on-premises database, move the aggregation job to the read replica, and put Lambda (which supports Node.js) behind an ALB so there is no impact on the client side.
upvoted 2 times
...
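On the Lambda-behind-a-load-balancer point above: an ALB (unlike an NLB) supports a lambda target type. A sketch of the wiring, with placeholder function and target group names:

```python
import boto3

elbv2 = boto3.client("elbv2")
lam = boto3.client("lambda")

function_arn = "arn:aws:lambda:us-east-1:111122223333:function:collector-fn"  # placeholder

# Lambda target groups take no protocol, port, or VPC; the ALB invokes
# the function directly with the HTTP request as the event payload.
tg = elbv2.create_target_group(Name="collector-fn-tg", TargetType="lambda")
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# The ELB service principal must be allowed to invoke the function.
lam.add_permission(
    FunctionName="collector-fn",
    StatementId="alb-invoke",
    Action="lambda:InvokeFunction",
    Principal="elasticloadbalancing.amazonaws.com",
    SourceArn=tg_arn,
)

elbv2.register_targets(TargetGroupArn=tg_arn, Targets=[{"Id": function_arn}])
```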
shaaam80
8 months, 3 weeks ago
Selected Answer: C
Answer C
upvoted 1 times
...
NikkyDicky
1 year, 1 month ago
Selected Answer: C
It's a C.
upvoted 1 times
...
SkyZeroZx
1 year, 2 months ago
Selected Answer: C
Keywords = DMS & RDS Proxy, so C.
upvoted 2 times
...
leehjworking
1 year, 3 months ago
Selected Answer: C
A, D: restart = interruption? B: an ASG... why?
upvoted 3 times
chikorita
1 year, 2 months ago
why ...oh...why?
upvoted 1 times
...
...
mfsec
1 year, 5 months ago
Selected Answer: C
I'll go with C.
upvoted 1 times
...
dev112233xx
1 year, 5 months ago
Selected Answer: C
C, even though the question didn't mention the run time of each job. If a job takes more than 15 minutes, Lambda can't be used, and the solution with an ASG and EC2 would probably be better. Not sure!
upvoted 3 times
...
zejou1
1 year, 5 months ago
Selected Answer: C
ALB, because you are pointing to a Lambda function, not a network address. Look at the AWS DMS features: https://aws.amazon.com/dms/features/ Main requirement: the migration needs to occur without interruptions or changes for the company's customers. C keeps it stupid simple with no service interruption.
upvoted 1 times
...
vherman
1 year, 5 months ago
Could anybody explain why ALB? I'd go with API Gateway
upvoted 1 times
zejou1
1 year, 5 months ago
Application: you are using Lambda functions that will be serving API calls; you would use a Network Load Balancer when it is just about routing.
upvoted 1 times
...
...
Sarutobi
1 year, 6 months ago
Selected Answer: C
I would say C.
upvoted 1 times
...
hobokabobo
1 year, 6 months ago
I have a feeling that none of the approaches will work. a) We have two sources that change the database: the migration and new data coming in. In a relational database this results in inconsistent data; constraints will not be fulfilled. b) Until the database is fully synced, the second database has inconsistent data: parts of relations and parts of entities are still missing, so constraints will not be fulfilled. None of the approaches addresses the fact that the aggregation tasks fail because of inconsistency in the database.
upvoted 1 times
hobokabobo
1 year, 6 months ago
ACID principle: atomicity, consistency, isolation and durability. All solutions violate this basic principle of relational databases. https://en.wikipedia.org/wiki/ACID
upvoted 1 times
...
...
God_Is_Love
1 year, 6 months ago
The issue could be that the same DB is used for both heavy writing and reading. The solution is to separate these: a read replica used only for reading, and DMS for the data migration from on premises to AWS. The writing app loads data into the DB and the reading app builds reports from it; the writing app uses RDS Proxy to save data, while the reading app reads from the replica. B is wrong because the reading (aggregation) job needs to use the replica, which is mentioned in C. C is correct.
upvoted 2 times
...
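A sketch of what the write path described above could look like inside the collector Lambda, connecting through the RDS Proxy endpoint. The environment variables, table, and hostname are made-up placeholders, and pymysql is assumed to be bundled with the deployment package:

```python
import os
import pymysql  # not in the Lambda runtime by default; ship it with the function

def handler(event, context):
    # Connect to the RDS Proxy endpoint, which pools connections in front
    # of the Aurora writer (e.g. my-proxy.proxy-xxxx.us-east-1.rds.amazonaws.com).
    conn = pymysql.connect(
        host=os.environ["PROXY_ENDPOINT"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
        database="sensors",          # placeholder schema
        connect_timeout=5,
    )
    try:
        with conn.cursor() as cur:
            cur.execute(
                "INSERT INTO readings (sensor_id, value) VALUES (%s, %s)",
                (event["sensor_id"], event["value"]),  # placeholder payload shape
            )
        conn.commit()
    finally:
        conn.close()
    return {"statusCode": 200, "body": "ok"}
```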
Fatoch
1 year, 6 months ago
Is it C or B? The same person answered twice with two different answers.
upvoted 1 times
...
zozza2023
1 year, 6 months ago
Selected Answer: C
C is correct.
upvoted 3 times
...
masetromain
1 year, 7 months ago
Selected Answer: C
C. This option would meet the requirements of resolving the data loading issue and migrating without interruption or changes for the company's customers. By using AWS DMS for continuous data replication, the company can ensure that the data being migrated is up to date. By setting up an Aurora Replica and moving the aggregation jobs to run against it, the company can offload some of the read workload from the primary database and reduce the risk of issues with the load jobs. By using AWS Lambda functions behind an ALB and Amazon RDS Proxy to write to the Aurora MySQL database, the company can add an extra layer of security and scalability to the data collection process. Finally, by pointing the collector DNS record to the ALB after the databases are synced and disabling the AWS DMS sync task, the company can ensure a smooth cutover to the new environment.
upvoted 4 times
masetromain
1 year, 7 months ago
A. This option would not work, as it would require changing the primary database, and it may also cause an interruption for the company's customers during the cutover process. B. This option would not work, as it does not include an Aurora Replica to offload the read workload; the aggregation jobs would run on the primary database, which can cause the load jobs to fail during heavy loads. D. This option would not work, as it would require using a Kinesis data stream, which may cause performance issues and may not be the best fit for this use case. Additionally, using Kinesis Data Firehose would add complexity to the data replication process and may result in increased latency or data loss.
upvoted 2 times
...
...
zhangyu20000
1 year, 7 months ago
C is correct. A read replica is needed for the aggregation jobs to read data from.
upvoted 3 times
...
masetromain
1 year, 7 months ago
Selected Answer: B
The correct answer is B. Setting up an Amazon Aurora MySQL database and using AWS Database Migration Service (AWS DMS) to perform continuous data replication from the on-premises database to Aurora will ensure that data is continuously replicated to the new environment with minimal interruption. Moving the aggregation jobs to run against the Aurora MySQL database will ensure that the data is being read from the same database that is being loaded, which will resolve the data loading issue. Setting up collection endpoints behind an Application Load Balancer (ALB) as Amazon EC2 instances in an Auto Scaling group, and disabling the AWS DMS sync task after the cutover from on-premises to AWS, will ensure that the migration occurs without interruptions or changes for the company's customers.
upvoted 2 times
masetromain
1 year, 7 months ago
Answer A is incorrect because it's not necessary to set up an Aurora Replica for the Aurora MySQL database, doing this will introduce additional complexity and cost. Using Amazon RDS Proxy is not necessary for this scenario, and disabling the replication job and restarting the Aurora Replica as the primary instance will cause an interruption to the service. Answer C is incorrect because it's not necessary to set up an Aurora Replica for the Aurora MySQL database, doing this will introduce additional complexity and cost. Using Amazon RDS Proxy is not necessary for this scenario. Answer D is incorrect because it's not necessary to use Amazon Kinesis data stream and Firehose to replicate the data when AWS DMS can be used to perform continuous data replication. Also, disabling the replication job and restarting the Aurora Replica as the primary instance will cause an interruption to the service.
upvoted 1 times
andctygr
1 year, 7 months ago
Dude, can you please stop copy-pasting from ChatGPT? I am so sick of it. It is not a reliable source. Just stop it, for God's sake.
upvoted 13 times
jojom19980
1 year, 6 months ago
hhhhhhhhhh.
upvoted 2 times
...
Jesuisleon
1 year, 3 months ago
Before I read your comments, I thought I was the only one so sick of it :)
upvoted 2 times
...
...
...
...